Feb 14 10:41:28 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 14 10:41:28 crc restorecon[4589]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 10:41:28 crc restorecon[4589]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc 
restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc 
restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 
10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 
crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 10:41:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc 
restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 
crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc 
restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc 
restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc 
restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc 
restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 10:41:29 crc restorecon[4589]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 14 10:41:30 crc kubenswrapper[4736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 10:41:30 crc kubenswrapper[4736]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 14 10:41:30 crc kubenswrapper[4736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 10:41:30 crc kubenswrapper[4736]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 14 10:41:30 crc kubenswrapper[4736]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 10:41:30 crc kubenswrapper[4736]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.111859 4736 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118031 4736 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118060 4736 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118092 4736 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118104 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118112 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118121 4736 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118129 4736 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118138 4736 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118146 4736 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118154 4736 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118162 4736 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118171 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118179 4736 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118190 4736 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118202 4736 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118212 4736 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118221 4736 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118230 4736 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118239 4736 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118247 4736 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118255 4736 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118264 4736 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118272 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118280 4736 feature_gate.go:330] unrecognized feature gate: 
AWSEFSDriverVolumeMetrics
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118289 4736 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118300 4736 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118309 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118318 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118326 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118335 4736 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118343 4736 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118351 4736 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118359 4736 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118368 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118376 4736 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118384 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118392 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118400 4736 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118410 4736 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118419 4736 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118430 4736 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118439 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118450 4736 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118459 4736 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118468 4736 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118476 4736 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118486 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118496 4736 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118507 4736 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118519 4736 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118528 4736 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118537 4736 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118545 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118555 4736 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118563 4736 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118573 4736 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118581 4736 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118590 4736 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118600 4736 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118609 4736 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118617 4736 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118625 4736 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118633 4736 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118642 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118650 4736 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118658 4736 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118667 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118676 4736 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118684 4736 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118692 4736 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.118701 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120757 4736 flags.go:64] FLAG: --address="0.0.0.0"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120802 4736 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120819 4736 flags.go:64] FLAG: --anonymous-auth="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120832 4736 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120845 4736 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120854 4736 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120867 4736 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120878 4736 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120888 4736 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120898 4736 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120909 4736 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120919 4736 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120928 4736 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120938 4736 flags.go:64] FLAG: --cgroup-root=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120948 4736 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120958 4736 flags.go:64] FLAG: --client-ca-file=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120967 4736 flags.go:64] FLAG: --cloud-config=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120977 4736 flags.go:64] FLAG: --cloud-provider=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120985 4736 flags.go:64] FLAG: --cluster-dns="[]"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.120997 4736 flags.go:64] FLAG: --cluster-domain=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121006 4736 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121016 4736 flags.go:64] FLAG: --config-dir=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121026 4736 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121036 4736 flags.go:64] FLAG: --container-log-max-files="5"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121048 4736 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121058 4736 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121068 4736 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121078 4736 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121088 4736 flags.go:64] FLAG: --contention-profiling="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121097 4736 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121108 4736 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121118 4736 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121127 4736 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121139 4736 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121148 4736 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121158 4736 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121168 4736 flags.go:64] FLAG: --enable-load-reader="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121177 4736 flags.go:64] FLAG: --enable-server="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121186 4736 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121199 4736 flags.go:64] FLAG: --event-burst="100"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121209 4736 flags.go:64] FLAG: --event-qps="50"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121219 4736 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121228 4736 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121238 4736 flags.go:64] FLAG: --eviction-hard=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121249 4736 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121259 4736 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121269 4736 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121279 4736 flags.go:64] FLAG: --eviction-soft=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121289 4736 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121298 4736 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121308 4736 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121317 4736 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121327 4736 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121337 4736 flags.go:64] FLAG: --fail-swap-on="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121348 4736 flags.go:64] FLAG: --feature-gates=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121359 4736 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121369 4736 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121379 4736 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121389 4736 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121399 4736 flags.go:64] FLAG: --healthz-port="10248"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121409 4736 flags.go:64] FLAG: --help="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121419 4736 flags.go:64] FLAG: --hostname-override=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121428 4736 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121438 4736 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121448 4736 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121457 4736 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121467 4736 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121476 4736 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121486 4736 flags.go:64] FLAG: --image-service-endpoint=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121495 4736 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121504 4736 flags.go:64] FLAG: --kube-api-burst="100"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121514 4736 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121524 4736 flags.go:64] FLAG: --kube-api-qps="50"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121533 4736 flags.go:64] FLAG: --kube-reserved=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121544 4736 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121554 4736 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121564 4736 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121573 4736 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121583 4736 flags.go:64] FLAG: --lock-file=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121592 4736 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121601 4736 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121611 4736 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121625 4736 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121635 4736 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121644 4736 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121654 4736 flags.go:64] FLAG: --logging-format="text"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121664 4736 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121675 4736 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121684 4736 flags.go:64] FLAG: --manifest-url=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121693 4736 flags.go:64] FLAG: --manifest-url-header=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121705 4736 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121715 4736 flags.go:64] FLAG: --max-open-files="1000000"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121727 4736 flags.go:64] FLAG: --max-pods="110"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121737 4736 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121752 4736 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121801 4736 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121811 4736 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121821 4736 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121832 4736 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121841 4736 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121861 4736 flags.go:64] FLAG: --node-status-max-images="50"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121871 4736 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121881 4736 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121891 4736 flags.go:64] FLAG: --pod-cidr=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121900 4736 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121914 4736 flags.go:64] FLAG: --pod-manifest-path=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121923 4736 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121934 4736 flags.go:64] FLAG: --pods-per-core="0"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121944 4736 flags.go:64] FLAG: --port="10250"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121954 4736 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121964 4736 flags.go:64] FLAG: --provider-id=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121975 4736 flags.go:64] FLAG: --qos-reserved=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121986 4736 flags.go:64] FLAG: --read-only-port="10255"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.121997 4736 flags.go:64] FLAG: --register-node="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122007 4736 flags.go:64] FLAG: --register-schedulable="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122016 4736 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122031 4736 flags.go:64] FLAG: --registry-burst="10"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122041 4736 flags.go:64] FLAG: --registry-qps="5"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122051 4736 flags.go:64] FLAG: --reserved-cpus=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122061 4736 flags.go:64] FLAG: --reserved-memory=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122073 4736 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122083 4736 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122092 4736 flags.go:64] FLAG: --rotate-certificates="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122103 4736 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122112 4736 flags.go:64] FLAG: --runonce="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122122 4736 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122131 4736 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122141 4736 flags.go:64] FLAG: --seccomp-default="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122151 4736 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122160 4736 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122170 4736 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122180 4736 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122190 4736 flags.go:64] FLAG: --storage-driver-password="root"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122199 4736 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122209 4736 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122218 4736 flags.go:64] FLAG: --storage-driver-user="root"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122228 4736 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122237 4736 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122247 4736 flags.go:64] FLAG: --system-cgroups=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122257 4736 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122271 4736 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122281 4736 flags.go:64] FLAG: --tls-cert-file=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122290 4736 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122300 4736 flags.go:64] FLAG: --tls-min-version=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122310 4736 flags.go:64] FLAG: --tls-private-key-file=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122319 4736 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122328 4736 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122352 4736 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122362 4736 flags.go:64] FLAG: --v="2"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122384 4736 flags.go:64] FLAG: --version="false"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122395 4736 flags.go:64] FLAG: --vmodule=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122406 4736 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.122417 4736 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122624 4736 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122634 4736 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122644 4736 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122654 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122664 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122675 4736 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122686 4736 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122695 4736 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122704 4736 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122713 4736 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122721 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122730 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122738 4736 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122752 4736 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122784 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122796 4736 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122807 4736 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122816 4736 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122825 4736 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122833 4736 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122841 4736 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122850 4736 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122858 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122870 4736 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122880 4736 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122889 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122898 4736 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122908 4736 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122917 4736 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122927 4736 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122937 4736 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122945 4736 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122954 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122962 4736 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122971 4736 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122981 4736 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.122991 4736 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123000 4736 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123009 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123017 4736 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123030 4736 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123039 4736 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123047 4736 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123055 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123063 4736 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123072 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123080 4736 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123092 4736 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123100 4736 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123109 4736 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123118 4736 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123126 4736 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123134 4736 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123142 4736 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123150 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123159 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123167 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123175 4736 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123183 4736 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123191 4736 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123200 4736 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123208 4736 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123216 4736 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123224 4736 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123233 4736 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123241 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123253 4736 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123264 4736 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123273 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123282 4736 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.123292 4736 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.124152 4736 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.139291 4736 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.139342 4736 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139471 4736 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139487 4736 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139498 4736 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139508 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139518 4736 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139527 4736 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139535 4736 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139545 4736 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139554 4736 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139562 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139571 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139580 4736 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139589 4736 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139617 4736 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139625 4736 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139632 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 10:41:30 crc
kubenswrapper[4736]: W0214 10:41:30.139643 4736 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139656 4736 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139664 4736 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139673 4736 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139681 4736 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139690 4736 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139698 4736 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139706 4736 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139713 4736 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139720 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139730 4736 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139738 4736 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139753 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139793 4736 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 14 10:41:30 crc kubenswrapper[4736]: 
W0214 10:41:30.139804 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139815 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139824 4736 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139832 4736 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139839 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139847 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139855 4736 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139862 4736 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139870 4736 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139878 4736 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139885 4736 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139893 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139901 4736 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139910 4736 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139918 4736 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 14 
10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139925 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139933 4736 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139940 4736 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139948 4736 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139956 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139966 4736 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139975 4736 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139984 4736 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.139993 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140001 4736 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140010 4736 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140017 4736 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140025 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140033 4736 feature_gate.go:330] unrecognized feature gate: 
BareMetalLoadBalancer Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140041 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140049 4736 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140057 4736 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140064 4736 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140071 4736 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140079 4736 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140087 4736 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140094 4736 feature_gate.go:330] unrecognized feature gate: Example Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140102 4736 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140109 4736 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140117 4736 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140125 4736 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.140138 4736 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140402 4736 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140468 4736 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140484 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140494 4736 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140503 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140512 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140521 4736 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140529 4736 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140537 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140545 4736 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140553 4736 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140561 4736 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140569 4736 feature_gate.go:330] 
unrecognized feature gate: NetworkLiveMigration Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140580 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140591 4736 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140600 4736 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140609 4736 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140619 4736 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140629 4736 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140638 4736 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140648 4736 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140658 4736 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140668 4736 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140678 4736 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140690 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140700 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140709 4736 feature_gate.go:330] unrecognized feature gate: 
RouteAdvertisements Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140718 4736 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140727 4736 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140737 4736 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140746 4736 feature_gate.go:330] unrecognized feature gate: Example Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140789 4736 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140804 4736 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140818 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140829 4736 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140841 4736 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140850 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140865 4736 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140876 4736 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140887 4736 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140897 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 14 10:41:30 crc 
kubenswrapper[4736]: W0214 10:41:30.140907 4736 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140919 4736 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140930 4736 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140942 4736 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140952 4736 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140963 4736 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140973 4736 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140983 4736 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.140994 4736 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141004 4736 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141015 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141024 4736 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141034 4736 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141045 4736 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141054 4736 feature_gate.go:330] unrecognized 
feature gate: ClusterAPIInstall Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141062 4736 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141069 4736 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141077 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141085 4736 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141092 4736 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141104 4736 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141114 4736 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141123 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141132 4736 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141143 4736 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141151 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141159 4736 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141168 4736 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141177 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.141185 4736 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.141197 4736 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.142290 4736 server.go:940] "Client rotation is on, will bootstrap in background" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.147907 4736 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.148039 4736 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.151006 4736 server.go:997] "Starting client certificate rotation" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.151057 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.153636 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-17 18:33:58.457602509 +0000 UTC Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.153834 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.178651 4736 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.182637 4736 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.183278 4736 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.201076 4736 log.go:25] "Validated CRI v1 runtime API" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.244015 4736 log.go:25] "Validated CRI v1 image API" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.246259 4736 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.255579 4736 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-14-10-34-51-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.255783 4736 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.278066 4736 manager.go:217] Machine: {Timestamp:2026-02-14 10:41:30.274865904 +0000 UTC m=+0.643493332 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199468544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:cd5bc215-ecb6-489e-b52e-104c9081339f BootID:eaba9d57-0133-42a1-b586-0a2596194ba8 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599734272 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076107 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599734272 Type:vfs Inodes:3076107 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039894528 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:34:11:fc Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:34:11:fc Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:82:14:75 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:62:1c:3f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4e:d2:d4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:12:22:4c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:32:d6:d1:2e:32:b4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:21:d5:c7:ad:19 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199468544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction 
Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.278684 4736 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.279022 4736 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.281834 4736 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.282249 4736 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.282417 4736 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.282918 4736 topology_manager.go:138] "Creating topology manager with none policy"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.283035 4736 container_manager_linux.go:303] "Creating device plugin manager"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.283916 4736 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.284078 4736 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.284421 4736 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.284644 4736 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.289892 4736 kubelet.go:418] "Attempting to sync node with API server"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.290046 4736 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.290202 4736 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.290333 4736 kubelet.go:324] "Adding apiserver pod source"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.290482 4736 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.296609 4736 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.297242 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.297265 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.297356 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError"
Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.297389 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.298647 4736 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
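The kubelet entries above carry a journald prefix followed by a klog header (`Lmmdd hh:mm:ss.uuuuuu PID file:line] message`, where `L` is I/W/E/F for Info/Warning/Error/Fatal). A minimal sketch of splitting that header out, useful when triaging logs like these for the E-severity connection-refused errors; the function and field names here are our own illustration, not part of any kubelet tooling:

```python
import re

# klog header: severity letter, mmdd, wall-clock time, PID, source file:line, "] ", message.
KLOG_RE = re.compile(
    r"(?P<sev>[IWEF])"                      # I=Info, W=Warning, E=Error, F=Fatal
    r"(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<pid>\d+) "
    r"(?P<src>[\w.-]+:\d+)\] "
    r"(?P<msg>.*)"
)

def parse_klog(line: str):
    """Return the klog header fields as a dict, or None if no header is found."""
    m = KLOG_RE.search(line)  # search, not match: journald prepends its own prefix
    return m.groupdict() if m else None

entry = ('I0214 10:41:30.296609 4736 kuberuntime_manager.go:262] '
         '"Container runtime initialized" containerRuntime="cri-o"')
parsed = parse_klog(entry)
# parsed["sev"] is "I"; parsed["src"] is "kuberuntime_manager.go:262"
```

Filtering the journal through such a parser and keeping only `sev in {"E", "W"}` would surface the reflector and lease failures without the Info-level noise.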
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.300079 4736 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.301833 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.301883 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.301903 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.301922 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.301950 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.301966 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.301982 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.302009 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.302061 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.302083 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.302122 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.302135 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.303944 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.304796 4736 server.go:1280] "Started kubelet"
Feb 14 10:41:30 crc systemd[1]: Started Kubernetes Kubelet.
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.308385 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.308967 4736 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.308926 4736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.309717 4736 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.311387 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.311437 4736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.313486 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:02:16.328351831 +0000 UTC
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.314532 4736 factory.go:55] Registering systemd factory
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.315561 4736 factory.go:221] Registration of the systemd container factory successfully
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.314802 4736 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.315714 4736 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.314819 4736 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.314868 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.313082 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189416dca07e8cc1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 10:41:30.304711873 +0000 UTC m=+0.673339271,LastTimestamp:2026-02-14 10:41:30.304711873 +0000 UTC m=+0.673339271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.317072 4736 factory.go:153] Registering CRI-O factory
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.317537 4736 factory.go:221] Registration of the crio container factory successfully
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.317723 4736 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.317895 4736 factory.go:103] Registering Raw factory
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.318599 4736 manager.go:1196] Started watching for new ooms in manager
Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.324871 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="200ms"
Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.325214 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.325400 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.326191 4736 server.go:460] "Adding debug handlers to kubelet server"
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.334495 4736 manager.go:319] Starting recovery of all containers
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340338 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340431 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340463 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340491 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340516 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340540 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340565 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340590 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340620 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340683 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340711 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340739 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340805 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340838 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340863 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340886 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340912 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340939 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340963 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.340989 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341014 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341039 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341063 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341124 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341156 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341182 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341214 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341242 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341269 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341292 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341315 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341337 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341374 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341398 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341421 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341447 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341472 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341495 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341520 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341550 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341574 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341597 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341622 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341647 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341720 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341757 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341913 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341940 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341966 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.341993 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342018 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342044 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342078 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342107 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342133 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342161 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342187 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342213 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342236 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342261 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342285 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342310 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342335 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342361 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342386 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342410 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342437 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342462 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342486 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342517 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342540 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342566 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342592 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342677 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342714 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342748 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342811 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342838 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342861 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342888 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342912 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342936 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342960 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.342983 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343008 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343033 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343059 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343097 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343123 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4"
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343150 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343173 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343199 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343222 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343247 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343274 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343299 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343323 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343346 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343371 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343396 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343420 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" 
seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343454 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343479 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343506 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343542 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343572 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343600 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 
10:41:30.343627 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343652 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343677 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343705 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343731 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343795 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343824 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343850 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343873 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343896 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343920 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343942 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.343982 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344026 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344054 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344079 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344103 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344126 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344152 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344177 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344201 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344224 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344285 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344315 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344341 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344364 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344392 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344416 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344445 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344469 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344493 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344522 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344547 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344572 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344597 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344620 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344646 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344670 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344696 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344719 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.344744 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348055 4736 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348118 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348150 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348181 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348210 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348236 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348266 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348292 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348329 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348396 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348424 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348450 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348478 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348505 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348533 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348562 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348591 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348622 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348650 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348681 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348709 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348743 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348815 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348845 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348872 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348924 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348954 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.348984 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349010 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349038 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349065 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349094 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349125 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349154 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349184 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349216 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349245 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349274 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349303 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349330 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349357 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349387 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349417 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349444 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349471 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349497 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349525 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349552 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349579 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349606 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349634 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349664 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349691 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349722 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349793 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349824 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349850 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349877 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349903 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349928 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349956 4736 reconstruct.go:97] "Volume reconstruction finished" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.349974 4736 reconciler.go:26] "Reconciler: start to sync state" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.361737 4736 manager.go:324] Recovery completed Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.373406 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.376061 4736 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.376332 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.376478 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.378442 4736 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.378467 4736 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.378493 4736 state_mem.go:36] "Initialized new in-memory state store" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.392235 4736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.394272 4736 policy_none.go:49] "None policy: Start" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.395863 4736 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.395909 4736 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.395934 4736 kubelet.go:2335] "Starting kubelet main sync loop" Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.395997 4736 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.396974 4736 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.396997 4736 state_mem.go:35] "Initializing new in-memory state store" Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.397926 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.398007 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.416167 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.460120 4736 manager.go:334] "Starting Device Plugin manager" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.460181 4736 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 10:41:30 crc 
kubenswrapper[4736]: I0214 10:41:30.460196 4736 server.go:79] "Starting device plugin registration server" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.460639 4736 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.460662 4736 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.461069 4736 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.461154 4736 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.461169 4736 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.477995 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.496308 4736 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.496411 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.497390 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.497431 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.497447 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.497632 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.498091 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.498183 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.498557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.498589 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.498600 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.498713 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.498889 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.498940 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.499339 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.499355 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.499363 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.499900 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.499936 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.499961 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.500083 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.500261 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.500292 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501195 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501211 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501222 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501284 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501295 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501319 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501373 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501398 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501629 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc 
kubenswrapper[4736]: I0214 10:41:30.501821 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.501852 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.507147 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.507190 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.507210 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.508877 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.508916 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.508929 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.509130 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.509162 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.510424 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.510453 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.510463 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.525795 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="400ms" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.551866 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.551943 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.551999 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552045 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552093 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552128 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552196 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552231 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552262 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552298 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552329 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552364 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552393 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552425 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.552490 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.563070 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.564614 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.564652 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.564667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.564696 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.565199 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Feb 14 10:41:30 crc 
kubenswrapper[4736]: I0214 10:41:30.653686 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.653783 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.653822 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.653855 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.653888 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.653920 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.653947 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.653978 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.653963 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654020 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.653987 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 10:41:30 crc 
kubenswrapper[4736]: I0214 10:41:30.654139 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654109 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654154 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654008 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654013 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654282 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654339 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654346 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654380 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654416 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654438 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654448 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654460 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654482 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654515 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654519 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654538 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654563 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.654679 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.766074 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.771515 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.771584 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.771636 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.771673 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.772250 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" 
node="crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.847714 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.857466 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.873522 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.891558 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: I0214 10:41:30.897586 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.907209 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f0838d765b58c27a57c956f8d5a8ae3eb122222333805e1af92f530d9561d11a WatchSource:0}: Error finding container f0838d765b58c27a57c956f8d5a8ae3eb122222333805e1af92f530d9561d11a: Status 404 returned error can't find the container with id f0838d765b58c27a57c956f8d5a8ae3eb122222333805e1af92f530d9561d11a Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.915867 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2cb5e8960d2b77210afce4580fc829fdf6ab508368db43e819651c74407dd5d4 WatchSource:0}: Error finding container 2cb5e8960d2b77210afce4580fc829fdf6ab508368db43e819651c74407dd5d4: Status 404 returned error can't find the container with id 
2cb5e8960d2b77210afce4580fc829fdf6ab508368db43e819651c74407dd5d4 Feb 14 10:41:30 crc kubenswrapper[4736]: W0214 10:41:30.917718 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-21a5d06f738bfafd59b20adc71efdc624d13d8f3f16a86418493adc5b742bac0 WatchSource:0}: Error finding container 21a5d06f738bfafd59b20adc71efdc624d13d8f3f16a86418493adc5b742bac0: Status 404 returned error can't find the container with id 21a5d06f738bfafd59b20adc71efdc624d13d8f3f16a86418493adc5b742bac0 Feb 14 10:41:30 crc kubenswrapper[4736]: E0214 10:41:30.926693 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="800ms" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.172811 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.174773 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.174842 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.174858 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.174910 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 10:41:31 crc kubenswrapper[4736]: E0214 10:41:31.175681 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection 
refused" node="crc" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.311405 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.314297 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:20:22.202376852 +0000 UTC Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.400912 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"21a5d06f738bfafd59b20adc71efdc624d13d8f3f16a86418493adc5b742bac0"} Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.401847 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f0838d765b58c27a57c956f8d5a8ae3eb122222333805e1af92f530d9561d11a"} Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.402990 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f40b18df0f5795d448c19acaecbf2a7caa2c34fe89fd105569610766dec005c3"} Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.403863 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a96484b0bfcaf6719d598170ca2f92e6840bca3926f2c0d072f2238849216e26"} Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.404784 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2cb5e8960d2b77210afce4580fc829fdf6ab508368db43e819651c74407dd5d4"} Feb 14 10:41:31 crc kubenswrapper[4736]: W0214 10:41:31.653878 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:31 crc kubenswrapper[4736]: E0214 10:41:31.654015 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Feb 14 10:41:31 crc kubenswrapper[4736]: E0214 10:41:31.727946 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="1.6s" Feb 14 10:41:31 crc kubenswrapper[4736]: W0214 10:41:31.749119 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:31 crc kubenswrapper[4736]: E0214 10:41:31.749229 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Feb 14 
10:41:31 crc kubenswrapper[4736]: W0214 10:41:31.798098 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:31 crc kubenswrapper[4736]: E0214 10:41:31.798211 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Feb 14 10:41:31 crc kubenswrapper[4736]: W0214 10:41:31.813447 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:31 crc kubenswrapper[4736]: E0214 10:41:31.813592 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.976582 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.978659 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.978706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:31 
crc kubenswrapper[4736]: I0214 10:41:31.978722 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:31 crc kubenswrapper[4736]: I0214 10:41:31.978782 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 10:41:31 crc kubenswrapper[4736]: E0214 10:41:31.981601 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.240262 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 10:41:32 crc kubenswrapper[4736]: E0214 10:41:32.241493 4736 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.311685 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.314868 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:53:42.065600486 +0000 UTC Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.411297 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f" exitCode=0 Feb 14 10:41:32 crc 
kubenswrapper[4736]: I0214 10:41:32.411426 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f"} Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.411499 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.412666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.412698 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.412709 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.415263 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.415791 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05"} Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.415831 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c"} Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.415840 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:32 crc 
kubenswrapper[4736]: I0214 10:41:32.415851 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0"} Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.415873 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd"} Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.416519 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.416553 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.416568 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.417378 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.417396 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.417405 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.418477 4736 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee" exitCode=0 Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.418632 
4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.419122 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee"} Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.419542 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.419570 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.419583 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.421001 4736 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665" exitCode=0 Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.421057 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665"} Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.421128 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.421950 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.422017 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.422037 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.425023 4736 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48" exitCode=0 Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.425074 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48"} Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.425139 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.426801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.426836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:32 crc kubenswrapper[4736]: I0214 10:41:32.426851 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.310981 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.316168 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:12:09.748185057 
+0000 UTC Feb 14 10:41:33 crc kubenswrapper[4736]: E0214 10:41:33.329051 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="3.2s" Feb 14 10:41:33 crc kubenswrapper[4736]: W0214 10:41:33.376408 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:33 crc kubenswrapper[4736]: E0214 10:41:33.376487 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.432187 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.432374 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595"} Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.432448 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8"} Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.432463 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00"} Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.433437 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.433473 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.433506 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.436243 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be"} Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.436301 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf"} Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.436322 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0"} Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.439060 4736 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d" exitCode=0 Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.439127 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d"} Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.439255 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.440071 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.440092 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.440129 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.442682 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2"} Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.442711 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.442819 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.443928 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.443945 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.443964 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.443979 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.443979 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.444070 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.581702 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.583859 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.583897 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.583905 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.583930 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 10:41:33 crc kubenswrapper[4736]: E0214 10:41:33.584540 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Feb 14 10:41:33 crc kubenswrapper[4736]: I0214 10:41:33.869401 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.310969 4736 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.317099 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:20:22.111555343 +0000 UTC Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.447744 4736 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69" exitCode=0 Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.447795 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69"} Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.447929 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.449225 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.449264 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.449277 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.465516 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.466288 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.466836 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064"} Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.466887 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008"} Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.466961 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.467357 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.467394 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.468947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.468980 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.468992 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.469003 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.469026 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 
10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.469042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.469104 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.469119 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.469152 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.468953 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.469193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:34 crc kubenswrapper[4736]: I0214 10:41:34.469206 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.317959 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:24:21.588285305 +0000 UTC Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.396275 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.472918 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41"} Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 
10:41:35.472997 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.473007 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2"} Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.473024 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.473028 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315"} Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.473153 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.473175 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593"} Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.474874 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.474920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.474966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.474979 4736 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.475053 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:35 crc kubenswrapper[4736]: I0214 10:41:35.475072 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.318458 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:54:37.532889337 +0000 UTC Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.365959 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.482324 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466"} Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.482389 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.482414 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.483716 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.483800 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.483819 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:36 crc kubenswrapper[4736]: 
I0214 10:41:36.483719 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.483857 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.483876 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.636925 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.785344 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.786670 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.786712 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.786725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.786776 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.869566 4736 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 10:41:36 crc kubenswrapper[4736]: I0214 10:41:36.869652 4736 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.318600 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 02:45:46.356389919 +0000 UTC Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.485083 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.485117 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.486342 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.486384 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.486398 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.486500 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.486543 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.486568 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:37 crc kubenswrapper[4736]: I0214 10:41:37.773859 4736 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.319109 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:11:14.844146557 +0000 UTC Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.487263 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.488455 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.488517 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.488541 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.949457 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.949715 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.951111 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.951157 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:38 crc kubenswrapper[4736]: I0214 10:41:38.951176 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.198214 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.198453 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.199960 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.200010 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.200026 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.208660 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.208915 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.210239 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.210295 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.210318 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:39 crc kubenswrapper[4736]: I0214 10:41:39.319954 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:59:54.187073651 +0000 UTC Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.178117 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.178320 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.180165 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.180224 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.180243 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.186802 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.320473 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:50:14.646161721 +0000 UTC Feb 14 10:41:40 crc kubenswrapper[4736]: E0214 10:41:40.478982 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.492515 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.493989 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.494047 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:40 crc kubenswrapper[4736]: I0214 10:41:40.494069 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:41 crc kubenswrapper[4736]: I0214 10:41:41.320910 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:45:06.217658825 +0000 UTC Feb 14 10:41:41 crc kubenswrapper[4736]: I0214 10:41:41.753938 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 14 10:41:41 crc kubenswrapper[4736]: I0214 10:41:41.754237 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:41 crc kubenswrapper[4736]: I0214 10:41:41.755741 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:41 crc kubenswrapper[4736]: I0214 10:41:41.756001 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:41 crc kubenswrapper[4736]: I0214 10:41:41.756153 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:42 crc kubenswrapper[4736]: I0214 10:41:42.321898 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:38:07.271579405 +0000 UTC Feb 14 10:41:43 crc kubenswrapper[4736]: I0214 10:41:43.322181 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:47:08.524405972 +0000 UTC Feb 14 10:41:44 crc kubenswrapper[4736]: I0214 10:41:44.323149 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:45:33.167574253 +0000 UTC Feb 14 10:41:44 crc kubenswrapper[4736]: W0214 10:41:44.386937 
4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 14 10:41:44 crc kubenswrapper[4736]: I0214 10:41:44.387075 4736 trace.go:236] Trace[1076564090]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 10:41:34.385) (total time: 10001ms): Feb 14 10:41:44 crc kubenswrapper[4736]: Trace[1076564090]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:41:44.386) Feb 14 10:41:44 crc kubenswrapper[4736]: Trace[1076564090]: [10.001823883s] [10.001823883s] END Feb 14 10:41:44 crc kubenswrapper[4736]: E0214 10:41:44.387114 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 14 10:41:44 crc kubenswrapper[4736]: W0214 10:41:44.392719 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 14 10:41:44 crc kubenswrapper[4736]: I0214 10:41:44.392867 4736 trace.go:236] Trace[1085755841]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 10:41:34.390) (total time: 10001ms): Feb 14 10:41:44 crc kubenswrapper[4736]: Trace[1085755841]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:41:44.392) Feb 14 
10:41:44 crc kubenswrapper[4736]: Trace[1085755841]: [10.001865274s] [10.001865274s] END Feb 14 10:41:44 crc kubenswrapper[4736]: E0214 10:41:44.392920 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 14 10:41:44 crc kubenswrapper[4736]: W0214 10:41:44.546053 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 14 10:41:44 crc kubenswrapper[4736]: I0214 10:41:44.546142 4736 trace.go:236] Trace[477718185]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 10:41:34.544) (total time: 10001ms): Feb 14 10:41:44 crc kubenswrapper[4736]: Trace[477718185]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:41:44.546) Feb 14 10:41:44 crc kubenswrapper[4736]: Trace[477718185]: [10.001749531s] [10.001749531s] END Feb 14 10:41:44 crc kubenswrapper[4736]: E0214 10:41:44.546165 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 14 10:41:44 crc kubenswrapper[4736]: I0214 10:41:44.627131 4736 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 14 10:41:44 crc kubenswrapper[4736]: I0214 10:41:44.627205 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 14 10:41:44 crc kubenswrapper[4736]: I0214 10:41:44.633505 4736 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 14 10:41:44 crc kubenswrapper[4736]: I0214 10:41:44.633584 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 14 10:41:45 crc kubenswrapper[4736]: I0214 10:41:45.323459 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:01:45.877912867 +0000 UTC Feb 14 10:41:46 crc kubenswrapper[4736]: I0214 10:41:46.324156 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:38:45.663490321 +0000 UTC Feb 14 10:41:46 crc kubenswrapper[4736]: I0214 10:41:46.869901 4736 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: 
Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 10:41:46 crc kubenswrapper[4736]: I0214 10:41:46.870013 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 10:41:47 crc kubenswrapper[4736]: I0214 10:41:47.324307 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:02:32.191746534 +0000 UTC Feb 14 10:41:47 crc kubenswrapper[4736]: I0214 10:41:47.780997 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:47 crc kubenswrapper[4736]: I0214 10:41:47.781138 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:47 crc kubenswrapper[4736]: I0214 10:41:47.782253 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:47 crc kubenswrapper[4736]: I0214 10:41:47.782286 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:47 crc kubenswrapper[4736]: I0214 10:41:47.782296 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:47 crc kubenswrapper[4736]: I0214 10:41:47.788289 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:48 crc kubenswrapper[4736]: 
I0214 10:41:48.231680 4736 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 14 10:41:48 crc kubenswrapper[4736]: I0214 10:41:48.325607 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:34:54.023905052 +0000 UTC Feb 14 10:41:48 crc kubenswrapper[4736]: I0214 10:41:48.517996 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:48 crc kubenswrapper[4736]: I0214 10:41:48.519531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:48 crc kubenswrapper[4736]: I0214 10:41:48.519599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:48 crc kubenswrapper[4736]: I0214 10:41:48.519612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.219793 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.220041 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.222012 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.222074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.222091 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.325962 
4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:55:41.080137141 +0000 UTC Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.603084 4736 trace.go:236] Trace[1330318921]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 10:41:36.648) (total time: 12954ms): Feb 14 10:41:49 crc kubenswrapper[4736]: Trace[1330318921]: ---"Objects listed" error: 12954ms (10:41:49.602) Feb 14 10:41:49 crc kubenswrapper[4736]: Trace[1330318921]: [12.954695247s] [12.954695247s] END Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.603124 4736 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 14 10:41:49 crc kubenswrapper[4736]: E0214 10:41:49.604359 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.607321 4736 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 14 10:41:49 crc kubenswrapper[4736]: E0214 10:41:49.608110 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.643399 4736 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.757088 4736 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 14 10:41:49 crc 
kubenswrapper[4736]: I0214 10:41:49.757137 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.763149 4736 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42850->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.763245 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42850->192.168.126.11:17697: read: connection reset by peer" Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.823292 4736 csr.go:261] certificate signing request csr-8hcz6 is approved, waiting to be issued Feb 14 10:41:49 crc kubenswrapper[4736]: I0214 10:41:49.839753 4736 csr.go:257] certificate signing request csr-8hcz6 is issued Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.149634 4736 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 14 10:41:50 crc kubenswrapper[4736]: W0214 10:41:50.149995 4736 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 14 10:41:50 crc kubenswrapper[4736]: E0214 10:41:50.150042 4736 event.go:368] "Unable to write event 
(may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.212:50180->38.102.83.212:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189416dcc535e1ed openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 10:41:30.920706541 +0000 UTC m=+1.289333909,LastTimestamp:2026-02-14 10:41:30.920706541 +0000 UTC m=+1.289333909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.326427 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:11:40.494247471 +0000 UTC Feb 14 10:41:50 crc kubenswrapper[4736]: E0214 10:41:50.479880 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.506292 4736 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.523878 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 10:41:50 crc 
kubenswrapper[4736]: I0214 10:41:50.525518 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064" exitCode=255 Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.525563 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064"} Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.525712 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.526484 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.526510 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.526520 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.527173 4736 scope.go:117] "RemoveContainer" containerID="8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064" Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.671693 4736 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.841405 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-14 10:36:49 +0000 UTC, rotation deadline is 2026-11-20 15:42:47.93972329 +0000 UTC Feb 14 10:41:50 crc kubenswrapper[4736]: I0214 10:41:50.841442 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6701h0m57.098283194s 
for next certificate rotation Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.152283 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.305727 4736 apiserver.go:52] "Watching apiserver" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.308421 4736 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.308688 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-8fm57","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.309023 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.309143 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.309180 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.309068 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.309322 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.309336 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.309372 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.309518 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.309574 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.309721 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8fm57" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.313830 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.313890 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.313841 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.314130 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.314233 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.314241 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.314534 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.314630 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.314650 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.314778 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.314847 4736 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.315225 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.316869 4736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.317712 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.317802 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.317841 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.317871 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.317896 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.317919 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.317944 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.317969 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.317997 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318019 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318048 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318072 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318094 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318119 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318148 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318174 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318195 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318223 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318248 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318274 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318294 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" 
(UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318318 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318340 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318416 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318442 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318464 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318496 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318521 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318543 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318567 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318589 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318615 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318650 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318673 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318721 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318765 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318788 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318810 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318832 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318855 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318878 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318898 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318926 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 14 
10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318948 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318970 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.318995 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319018 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319041 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319047 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319066 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319095 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319118 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319141 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319169 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319194 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319217 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319218 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319251 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319276 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319307 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319332 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319356 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319379 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319404 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319430 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319455 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319480 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319505 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319529 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319554 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319606 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319632 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319654 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319683 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319710 4736 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319735 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319778 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319804 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319827 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319852 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319874 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319898 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319922 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319952 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319973 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320007 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320037 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320061 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320084 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320108 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320131 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") 
pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320157 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320182 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320208 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320232 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320256 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320281 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320301 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320324 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320353 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320381 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320409 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320435 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320459 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320483 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320511 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320533 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320556 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320583 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320645 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320718 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320764 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320791 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320813 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320834 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320856 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320883 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320909 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320931 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" 
(UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320957 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320981 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321004 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321029 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321052 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321076 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321101 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321125 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321157 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321183 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321204 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 
10:41:51.321227 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321251 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321275 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321301 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321325 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321348 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321371 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321394 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321416 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321443 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321467 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 10:41:51 crc 
kubenswrapper[4736]: I0214 10:41:51.321501 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321524 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321548 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321576 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321604 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321628 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321670 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321695 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321720 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321773 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321801 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 10:41:51 crc 
kubenswrapper[4736]: I0214 10:41:51.321996 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322025 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322051 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322079 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322102 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322133 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322157 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322181 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322205 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322228 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322252 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") 
" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322275 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322301 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322326 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322350 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322374 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322398 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322430 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322454 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322477 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322501 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322529 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: 
I0214 10:41:51.322552 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322578 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322608 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322633 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322660 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322684 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322711 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322736 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322784 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322812 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322840 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322867 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" 
(UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322894 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322920 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322946 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322973 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.322999 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:51 
crc kubenswrapper[4736]: I0214 10:41:51.323029 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323059 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323088 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323111 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323136 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323160 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323184 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323253 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323286 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323316 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323350 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c17edb3a-04a8-4c2d-8216-43dd45a1bf96-hosts-file\") pod \"node-resolver-8fm57\" (UID: \"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\") " pod="openshift-dns/node-resolver-8fm57" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323384 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323411 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323436 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323466 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 
10:41:51.323497 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323524 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323558 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323584 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323612 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323640 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323669 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323700 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t88lg\" (UniqueName: \"kubernetes.io/projected/c17edb3a-04a8-4c2d-8216-43dd45a1bf96-kube-api-access-t88lg\") pod \"node-resolver-8fm57\" (UID: \"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\") " pod="openshift-dns/node-resolver-8fm57" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323828 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323847 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.337512 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.339483 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.340698 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319401 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319528 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319694 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319822 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319864 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.319967 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320003 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320135 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320197 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320309 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320384 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320625 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.320710 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321091 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321408 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321558 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.321627 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323005 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323286 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.323975 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.324378 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.324684 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.325084 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.325285 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.325640 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.325970 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.326297 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.326624 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.326912 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.326976 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.327278 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.327382 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.327486 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:41:51.827438209 +0000 UTC m=+22.196065577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.327814 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.327937 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.328154 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.328313 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.328530 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.328685 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.328687 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.332392 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:18:13.727791982 +0000 UTC Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.332544 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.332672 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.332736 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.332996 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.333937 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.334008 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.334130 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.334170 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.334518 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.334583 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.334609 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.334629 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.334847 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.334901 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.335141 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.335917 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.336278 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.336393 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.336455 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.336476 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.336687 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.337031 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.339461 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.339515 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.339791 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.339978 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.339995 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.340286 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.340448 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.340574 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.340914 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.328917 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.341534 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.337078 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.341787 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.333647 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.342374 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.345541 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:51.845514594 +0000 UTC m=+22.214141962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.349361 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:51.849342921 +0000 UTC m=+22.217970289 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.347752 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.349390 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.349673 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.349901 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.350134 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.350558 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.350697 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.350934 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.351162 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.352438 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.352498 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.352740 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.352818 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.352985 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.353222 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.353234 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.353456 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.353656 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.353902 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.354115 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.354485 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.354617 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.354986 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.355049 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.355198 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.363223 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.363585 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.363896 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.364045 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.364066 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.364079 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.364138 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:51.864119984 +0000 UTC m=+22.232747352 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.364394 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.364449 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.364874 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.365120 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.365319 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.365523 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.365790 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.366232 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.366580 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.366864 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.367103 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.367205 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.367338 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.367641 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.368042 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.368083 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.368305 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.368514 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.377548 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.377788 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.377901 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.378193 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.378283 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zm7d8"] Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.378583 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.378653 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.378977 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.379054 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.379710 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.379795 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.379822 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.380138 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.380444 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.380718 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.381010 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.381238 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.381684 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.382836 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.383159 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.383491 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.384368 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.384421 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.384555 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.384884 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.385011 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.385065 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.385378 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.385497 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.386095 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.386485 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.386938 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.387033 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.387078 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.387298 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.389093 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.389136 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.389303 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.389564 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.389621 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.389789 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.389843 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.390215 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.388984 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.390573 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.390622 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.390640 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.390700 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:51.890681026 +0000 UTC m=+22.259308394 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.391135 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.391026 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.391917 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.392122 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.392610 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.392978 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.393036 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.393244 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.393287 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.393439 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.393598 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.393670 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.393860 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.394024 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.394379 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.394865 4736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.396109 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.396313 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.398616 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.398823 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.399049 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.399066 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.402678 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.404009 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.407128 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.410332 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.412954 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.414235 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.416165 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.416720 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-w6fw9"] Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.417399 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424092 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424464 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t88lg\" (UniqueName: \"kubernetes.io/projected/c17edb3a-04a8-4c2d-8216-43dd45a1bf96-kube-api-access-t88lg\") pod \"node-resolver-8fm57\" (UID: \"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\") " pod="openshift-dns/node-resolver-8fm57" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424503 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c17edb3a-04a8-4c2d-8216-43dd45a1bf96-hosts-file\") pod \"node-resolver-8fm57\" (UID: \"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\") " pod="openshift-dns/node-resolver-8fm57" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424529 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-system-cni-dir\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424548 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-multus-cni-dir\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424578 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424599 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424640 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424651 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424662 4736 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424672 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424681 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424689 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424697 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424705 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424714 4736 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424723 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424731 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424784 4736 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424794 4736 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424802 4736 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424810 4736 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424820 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424831 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc 
kubenswrapper[4736]: I0214 10:41:51.424841 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424852 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424860 4736 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424870 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424879 4736 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424887 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424897 4736 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424906 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424915 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424923 4736 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424932 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424944 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424952 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424960 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424968 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424976 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424984 4736 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.424992 4736 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425000 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425008 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425016 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425024 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc 
kubenswrapper[4736]: I0214 10:41:51.425032 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425040 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425048 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425058 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425070 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425081 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425096 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425107 4736 reconciler_common.go:293] "Volume 
detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425118 4736 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425163 4736 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425172 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425181 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425189 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425198 4736 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425207 4736 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425217 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425226 4736 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425237 4736 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425245 4736 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425253 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425261 4736 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425271 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc 
kubenswrapper[4736]: I0214 10:41:51.425279 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425287 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425295 4736 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425303 4736 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425311 4736 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425318 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425327 4736 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425337 4736 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425346 4736 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425354 4736 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425362 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425370 4736 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425378 4736 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425385 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425394 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425402 4736 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425410 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425417 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425425 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425433 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425441 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425448 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425456 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425464 4736 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425472 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425480 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425488 4736 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425496 4736 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425504 4736 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425512 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425521 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425530 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425539 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425547 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425555 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425563 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425571 4736 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425579 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425587 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425595 4736 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425604 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425612 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425620 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425628 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425636 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425645 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425653 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425662 4736 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425670 4736 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425678 4736 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node 
\"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425693 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425700 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425708 4736 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425716 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425724 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425732 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425755 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425763 4736 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425771 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425779 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425788 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425797 4736 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425805 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425814 4736 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425822 4736 
reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425829 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425836 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425844 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425851 4736 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425858 4736 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425866 4736 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425873 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425881 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425889 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425896 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425904 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425912 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425919 4736 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425926 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 
10:41:51.425934 4736 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425942 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425949 4736 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425956 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425964 4736 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425971 4736 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425979 4736 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425988 4736 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.425996 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426005 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426012 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426020 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426028 4736 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426036 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426044 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426051 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426060 4736 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426068 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426076 4736 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426083 4736 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426090 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426098 4736 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc 
kubenswrapper[4736]: I0214 10:41:51.426106 4736 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426114 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426122 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426130 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426139 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426146 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426154 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426164 4736 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426175 4736 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426186 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426196 4736 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426205 4736 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426215 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426231 4736 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426242 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426249 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426256 4736 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426264 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426272 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426280 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426288 4736 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426297 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426305 4736 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426313 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426321 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426330 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426369 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426636 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c17edb3a-04a8-4c2d-8216-43dd45a1bf96-hosts-file\") pod \"node-resolver-8fm57\" (UID: \"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\") " pod="openshift-dns/node-resolver-8fm57" 
Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.426677 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.430578 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.430853 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.431059 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.431189 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.431608 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2bpbj"] Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.433722 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.434035 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.434150 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.445643 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.446332 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.446524 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.446680 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.446885 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.447013 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.447137 4736 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.447802 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.454517 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.455864 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.463026 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t88lg\" (UniqueName: \"kubernetes.io/projected/c17edb3a-04a8-4c2d-8216-43dd45a1bf96-kube-api-access-t88lg\") pod \"node-resolver-8fm57\" (UID: \"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\") " pod="openshift-dns/node-resolver-8fm57" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.468661 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.473199 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.492459 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.492860 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.509684 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.531215 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-system-cni-dir\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.531470 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-run-multus-certs\") pod 
\"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.531624 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-multus-cni-dir\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.531717 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-os-release\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.531837 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-etc-kubernetes\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.531926 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6qf\" (UniqueName: \"kubernetes.io/projected/db7224ab-d0ab-49e3-9154-4d9047057681-kube-api-access-rd6qf\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.531470 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-system-cni-dir\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc 
kubenswrapper[4736]: I0214 10:41:51.532112 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-hostroot\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.532207 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/db7224ab-d0ab-49e3-9154-4d9047057681-multus-daemon-config\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.532307 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-cnibin\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.532406 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-var-lib-cni-multus\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.532490 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.532580 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqhxw\" (UniqueName: 
\"kubernetes.io/projected/6cb2b116-efd4-4f64-be6c-5cc5a0655589-kube-api-access-kqhxw\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.532330 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-multus-cni-dir\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.532848 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6cb2b116-efd4-4f64-be6c-5cc5a0655589-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.532982 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-cnibin\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533064 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-var-lib-cni-bin\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533144 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-multus-conf-dir\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533248 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-run-k8s-cni-cncf-io\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533374 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-run-netns\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533468 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-system-cni-dir\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533540 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6cb2b116-efd4-4f64-be6c-5cc5a0655589-cni-binary-copy\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533629 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533706 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-var-lib-kubelet\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533786 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-os-release\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533871 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db7224ab-d0ab-49e3-9154-4d9047057681-cni-binary-copy\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.533959 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-multus-socket-dir-parent\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 
10:41:51.534071 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.534131 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.534185 4736 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.534254 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.536791 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911"} Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.541244 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.553794 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.563178 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.577008 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.603489 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.611875 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.622666 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.629378 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.633567 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635628 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-run-multus-certs\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc 
kubenswrapper[4736]: I0214 10:41:51.635670 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-os-release\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635713 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-etc-kubernetes\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635759 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-run-multus-certs\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635811 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-etc-kubernetes\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635838 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-os-release\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635735 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd6qf\" (UniqueName: 
\"kubernetes.io/projected/db7224ab-d0ab-49e3-9154-4d9047057681-kube-api-access-rd6qf\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635895 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-hostroot\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635918 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/db7224ab-d0ab-49e3-9154-4d9047057681-multus-daemon-config\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635938 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-cnibin\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635959 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-var-lib-cni-multus\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.635981 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqhxw\" (UniqueName: \"kubernetes.io/projected/6cb2b116-efd4-4f64-be6c-5cc5a0655589-kube-api-access-kqhxw\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: 
\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636019 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22bfc94a-170b-47f5-bc6b-c6e77720371d-rootfs\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636044 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22bfc94a-170b-47f5-bc6b-c6e77720371d-mcd-auth-proxy-config\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636080 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-cnibin\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636104 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6cb2b116-efd4-4f64-be6c-5cc5a0655589-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636116 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-var-lib-cni-multus\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636126 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-var-lib-cni-bin\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636145 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-hostroot\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636155 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-multus-conf-dir\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636179 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-run-k8s-cni-cncf-io\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636201 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-run-netns\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " 
pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636232 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-system-cni-dir\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636255 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6cb2b116-efd4-4f64-be6c-5cc5a0655589-cni-binary-copy\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636277 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636300 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-var-lib-kubelet\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636320 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-os-release\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " 
pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636346 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db7224ab-d0ab-49e3-9154-4d9047057681-cni-binary-copy\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636371 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjt6v\" (UniqueName: \"kubernetes.io/projected/22bfc94a-170b-47f5-bc6b-c6e77720371d-kube-api-access-pjt6v\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636394 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-multus-socket-dir-parent\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636416 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22bfc94a-170b-47f5-bc6b-c6e77720371d-proxy-tls\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636712 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/db7224ab-d0ab-49e3-9154-4d9047057681-multus-daemon-config\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") 
" pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636778 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-cnibin\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636788 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-cnibin\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.636975 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-var-lib-kubelet\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637041 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-os-release\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637230 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6cb2b116-efd4-4f64-be6c-5cc5a0655589-cni-binary-copy\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637656 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6cb2b116-efd4-4f64-be6c-5cc5a0655589-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637694 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-var-lib-cni-bin\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637720 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-multus-conf-dir\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637721 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db7224ab-d0ab-49e3-9154-4d9047057681-cni-binary-copy\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637755 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-run-k8s-cni-cncf-io\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637780 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-host-run-netns\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637810 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-system-cni-dir\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637815 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/db7224ab-d0ab-49e3-9154-4d9047057681-multus-socket-dir-parent\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.637898 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6cb2b116-efd4-4f64-be6c-5cc5a0655589-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: W0214 10:41:51.646062 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-beb6f1236bab6f0aca02b9e2d4990568d33877cdec266d83bf813a178c645225 WatchSource:0}: Error finding container beb6f1236bab6f0aca02b9e2d4990568d33877cdec266d83bf813a178c645225: Status 404 returned error can't find the container with id beb6f1236bab6f0aca02b9e2d4990568d33877cdec266d83bf813a178c645225 Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.648369 4736 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.658996 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqhxw\" (UniqueName: \"kubernetes.io/projected/6cb2b116-efd4-4f64-be6c-5cc5a0655589-kube-api-access-kqhxw\") pod \"multus-additional-cni-plugins-w6fw9\" (UID: \"6cb2b116-efd4-4f64-be6c-5cc5a0655589\") " pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.662884 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.671367 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd6qf\" (UniqueName: \"kubernetes.io/projected/db7224ab-d0ab-49e3-9154-4d9047057681-kube-api-access-rd6qf\") pod \"multus-zm7d8\" (UID: \"db7224ab-d0ab-49e3-9154-4d9047057681\") " pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.675656 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.676547 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.690144 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 10:41:51 crc kubenswrapper[4736]: W0214 10:41:51.693559 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-ca15a33cac577976065d62461f108f37f448423d083f1bf62f67ba3fad6c8fde WatchSource:0}: Error finding container ca15a33cac577976065d62461f108f37f448423d083f1bf62f67ba3fad6c8fde: Status 404 returned error can't find the container with id ca15a33cac577976065d62461f108f37f448423d083f1bf62f67ba3fad6c8fde Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.695856 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8fm57" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.696490 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.714197 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: W0214 10:41:51.720180 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc17edb3a_04a8_4c2d_8216_43dd45a1bf96.slice/crio-6a5497447e5955585d8f64298e26d24ccbc91c2b8504331c59eeca9194dad8d3 WatchSource:0}: Error finding container 6a5497447e5955585d8f64298e26d24ccbc91c2b8504331c59eeca9194dad8d3: Status 404 returned error can't find the container with id 6a5497447e5955585d8f64298e26d24ccbc91c2b8504331c59eeca9194dad8d3 Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.732705 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.736903 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjt6v\" (UniqueName: \"kubernetes.io/projected/22bfc94a-170b-47f5-bc6b-c6e77720371d-kube-api-access-pjt6v\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.736945 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22bfc94a-170b-47f5-bc6b-c6e77720371d-proxy-tls\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.736989 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22bfc94a-170b-47f5-bc6b-c6e77720371d-rootfs\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.737007 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22bfc94a-170b-47f5-bc6b-c6e77720371d-mcd-auth-proxy-config\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.737081 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22bfc94a-170b-47f5-bc6b-c6e77720371d-rootfs\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.737713 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22bfc94a-170b-47f5-bc6b-c6e77720371d-mcd-auth-proxy-config\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.746890 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.747271 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22bfc94a-170b-47f5-bc6b-c6e77720371d-proxy-tls\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.752964 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zm7d8" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.758951 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.762420 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjt6v\" (UniqueName: \"kubernetes.io/projected/22bfc94a-170b-47f5-bc6b-c6e77720371d-kube-api-access-pjt6v\") pod \"machine-config-daemon-2bpbj\" (UID: \"22bfc94a-170b-47f5-bc6b-c6e77720371d\") " pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.763644 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.773912 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.774225 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.796091 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.803015 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 14 10:41:51 crc kubenswrapper[4736]: W0214 10:41:51.813067 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22bfc94a_170b_47f5_bc6b_c6e77720371d.slice/crio-ba868b2f89535f07d95234d9e666232fd571665e889373856a793562a90905d8 WatchSource:0}: Error finding container ba868b2f89535f07d95234d9e666232fd571665e889373856a793562a90905d8: Status 404 returned error can't find the container with id ba868b2f89535f07d95234d9e666232fd571665e889373856a793562a90905d8 Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.813427 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k7vfr"] Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.815014 4736 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.821186 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.830214 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.830405 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.830507 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.830656 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.830879 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers 
with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.831146 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.831288 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.839065 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.839228 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:41:52.839214501 +0000 UTC m=+23.207841869 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.841118 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.850256 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.862988 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.880306 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.911603 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.925611 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940314 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-etc-openvswitch\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940356 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2mr\" (UniqueName: \"kubernetes.io/projected/4586e477-2198-4f75-aeba-0eaf894cde1a-kube-api-access-hb2mr\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940385 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940405 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940432 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940458 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-slash\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940475 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-log-socket\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940490 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-kubelet\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940507 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-systemd-units\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940525 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-var-lib-openvswitch\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940545 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-netd\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940578 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-config\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940599 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-ovn\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940622 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940645 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-openvswitch\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940664 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-node-log\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940685 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4586e477-2198-4f75-aeba-0eaf894cde1a-ovn-node-metrics-cert\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940704 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-env-overrides\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940731 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:51 crc 
kubenswrapper[4736]: I0214 10:41:51.940774 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-netns\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940795 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-script-lib\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940815 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-bin\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940841 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.940866 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-systemd\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941005 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941008 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941055 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941166 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941192 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941204 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941264 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-14 10:41:52.941246712 +0000 UTC m=+23.309874160 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941291 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941092 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941304 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:52.941295513 +0000 UTC m=+23.309922971 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941524 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:52.941505399 +0000 UTC m=+23.310132757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:51 crc kubenswrapper[4736]: E0214 10:41:51.941570 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:52.94154606 +0000 UTC m=+23.310173428 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.956085 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:51 crc kubenswrapper[4736]: I0214 10:41:51.982610 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.003180 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 
10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.016450 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.024738 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042187 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-node-log\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042224 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4586e477-2198-4f75-aeba-0eaf894cde1a-ovn-node-metrics-cert\") pod \"ovnkube-node-k7vfr\" (UID: 
\"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042242 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-env-overrides\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042265 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-netns\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042278 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-script-lib\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042306 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-systemd\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042324 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-bin\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042343 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-etc-openvswitch\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042359 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb2mr\" (UniqueName: \"kubernetes.io/projected/4586e477-2198-4f75-aeba-0eaf894cde1a-kube-api-access-hb2mr\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042375 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042395 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042426 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-slash\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042446 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-log-socket\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042464 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-kubelet\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042478 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-systemd-units\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042492 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-var-lib-openvswitch\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042519 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-netd\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc 
kubenswrapper[4736]: I0214 10:41:52.042534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-config\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042556 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-openvswitch\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042570 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-ovn\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042610 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-ovn\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042646 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-bin\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042665 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-etc-openvswitch\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042695 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-kubelet\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042699 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-log-socket\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042714 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-systemd-units\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042729 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-var-lib-openvswitch\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042798 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042363 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-node-log\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042834 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042398 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-systemd\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042866 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-slash\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042857 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-netns\") pod 
\"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042939 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-netd\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.042968 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-openvswitch\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.043092 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.047507 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4586e477-2198-4f75-aeba-0eaf894cde1a-ovn-node-metrics-cert\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.048473 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-env-overrides\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.050113 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-config\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.050353 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-script-lib\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.056907 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.059311 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb2mr\" (UniqueName: \"kubernetes.io/projected/4586e477-2198-4f75-aeba-0eaf894cde1a-kube-api-access-hb2mr\") pod \"ovnkube-node-k7vfr\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.079896 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.096224 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.105086 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.122207 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.141406 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.345476 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:37:23.615944354 +0000 UTC Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.401979 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.402786 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.404519 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.405516 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.407005 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.407715 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.408793 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.410149 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.411204 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.412528 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.413319 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.414010 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.414578 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.415092 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.415680 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.416267 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.416836 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.417251 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.417808 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.418456 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.418997 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.419552 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.419983 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.420591 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.421051 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.421620 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.422255 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.425831 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.426625 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.427551 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.428144 4736 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.428336 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.430715 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.431460 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.431937 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.433444 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.434767 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.435372 4736 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.436346 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.436975 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.437777 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.438337 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.439374 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.440031 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.440926 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.441445 4736 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.442432 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.443252 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.444179 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.444702 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.445563 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.446081 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.446668 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.447588 4736 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.540372 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.540420 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.540433 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"014abc8cb508e493ff233b9c5948ed9974d68bcf06f9525071d2e4f8ceeb47d8"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.541851 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"beb6f1236bab6f0aca02b9e2d4990568d33877cdec266d83bf813a178c645225"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.543302 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067" exitCode=0 Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.543373 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" 
event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.543400 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"d053642a6ed154e453d0dfa8f89d464d885d431feade9da44c93834d67e67440"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.545516 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerStarted","Data":"a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.545549 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerStarted","Data":"5953adc8f9a5c440cc8a353a3748eda7e50bfa522728e46ad0969a7b76bd2730"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.547323 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8fm57" event={"ID":"c17edb3a-04a8-4c2d-8216-43dd45a1bf96","Type":"ContainerStarted","Data":"22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.547353 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8fm57" event={"ID":"c17edb3a-04a8-4c2d-8216-43dd45a1bf96","Type":"ContainerStarted","Data":"6a5497447e5955585d8f64298e26d24ccbc91c2b8504331c59eeca9194dad8d3"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.548808 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.548835 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ca15a33cac577976065d62461f108f37f448423d083f1bf62f67ba3fad6c8fde"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.550558 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.550600 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.550622 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"ba868b2f89535f07d95234d9e666232fd571665e889373856a793562a90905d8"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.553008 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zm7d8" event={"ID":"db7224ab-d0ab-49e3-9154-4d9047057681","Type":"ContainerStarted","Data":"e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.553059 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zm7d8" 
event={"ID":"db7224ab-d0ab-49e3-9154-4d9047057681","Type":"ContainerStarted","Data":"c45ada0ece4420e4f1bf0f9eaab8a33faa1dfb97779362f6bf93278562639704"} Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.560076 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.577007 4736 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.603269 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.630263 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.642778 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.660132 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 
10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.675055 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.692159 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.720763 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.745713 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.762196 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.774062 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.782986 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.797655 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.806526 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.815945 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.824868 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.841733 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.849190 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:52 crc 
kubenswrapper[4736]: E0214 10:41:52.849332 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:41:54.84931716 +0000 UTC m=+25.217944528 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.855475 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 
10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.868593 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.879538 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.891117 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.904015 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.924432 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.936864 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.950753 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.950817 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.950852 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.950885 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.950821 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.950957 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:54.95094025 +0000 UTC m=+25.319567618 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.950988 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.951005 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.950998 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.951018 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.950988 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.951098 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.951102 4736 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:54.951081344 +0000 UTC m=+25.319708752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.951107 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.951156 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:54.951114055 +0000 UTC m=+25.319741523 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:52 crc kubenswrapper[4736]: E0214 10:41:52.951175 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:54.951164657 +0000 UTC m=+25.319792125 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.953794 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:52 crc kubenswrapper[4736]: I0214 10:41:52.970345 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:52Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.345608 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:04:52.652273207 +0000 UTC Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.353458 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jdrpk"] Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.353890 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.356534 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.356610 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.356827 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.356931 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.373462 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 
10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.386311 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.396430 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.396463 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.396476 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:53 crc kubenswrapper[4736]: E0214 10:41:53.396720 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.396470 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:53 crc kubenswrapper[4736]: E0214 10:41:53.396792 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:41:53 crc kubenswrapper[4736]: E0214 10:41:53.396852 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.411239 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.425630 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.449342 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.455195 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9-serviceca\") pod \"node-ca-jdrpk\" (UID: \"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\") " pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.455261 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2jql\" (UniqueName: \"kubernetes.io/projected/dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9-kube-api-access-q2jql\") pod \"node-ca-jdrpk\" (UID: \"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\") " pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.455309 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9-host\") pod \"node-ca-jdrpk\" (UID: \"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\") " pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.463932 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.477763 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.494865 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.509032 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.528384 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.540692 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.554336 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.556544 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9-host\") pod \"node-ca-jdrpk\" (UID: \"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\") " pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.556595 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9-serviceca\") pod \"node-ca-jdrpk\" (UID: \"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\") " pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.556638 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2jql\" (UniqueName: \"kubernetes.io/projected/dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9-kube-api-access-q2jql\") pod \"node-ca-jdrpk\" (UID: \"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\") " pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.556770 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9-host\") pod \"node-ca-jdrpk\" (UID: \"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\") " pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.558119 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9-serviceca\") pod \"node-ca-jdrpk\" (UID: \"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\") " pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.560143 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31"} Feb 14 10:41:53 crc 
kubenswrapper[4736]: I0214 10:41:53.560188 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80"} Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.560202 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f"} Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.560212 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951"} Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.560223 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6"} Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.560234 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b"} Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.561732 4736 generic.go:334] "Generic (PLEG): container finished" podID="6cb2b116-efd4-4f64-be6c-5cc5a0655589" containerID="a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a" exitCode=0 Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.562254 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerDied","Data":"a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a"} Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.565035 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.577569 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.579484 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2jql\" (UniqueName: \"kubernetes.io/projected/dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9-kube-api-access-q2jql\") pod \"node-ca-jdrpk\" (UID: \"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\") " pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.588054 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.598878 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.614620 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.625556 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.637737 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 
10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.652209 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.666470 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jdrpk" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.675721 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: W0214 10:41:53.689077 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd1eac55_e1d7_4aaf_83a8_786d84e7a8a9.slice/crio-5426ff14b80fca28610b40af21f47f7b2a1bb1488e2076af745ca9ccfd453889 WatchSource:0}: Error finding container 5426ff14b80fca28610b40af21f47f7b2a1bb1488e2076af745ca9ccfd453889: Status 404 returned error can't find the container with id 5426ff14b80fca28610b40af21f47f7b2a1bb1488e2076af745ca9ccfd453889 Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.715501 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.759501 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.802193 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.835161 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.875619 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.879039 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.882452 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.894927 4736 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.944813 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:53 crc kubenswrapper[4736]: I0214 10:41:53.985530 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:53Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.024664 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.058655 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.100462 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.147514 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.183844 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.216230 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.255331 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.297730 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.339211 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.346300 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:26:47.791766064 +0000 UTC Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.377351 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.416134 4736 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.456983 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.498347 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.532764 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.565958 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jdrpk" event={"ID":"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9","Type":"ContainerStarted","Data":"ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6"} Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.566009 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jdrpk" event={"ID":"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9","Type":"ContainerStarted","Data":"5426ff14b80fca28610b40af21f47f7b2a1bb1488e2076af745ca9ccfd453889"} Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.568025 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerStarted","Data":"804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b"} Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.569998 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616"} Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.580292 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.614278 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.653147 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.699680 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.734075 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.777361 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.816557 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.859760 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.868049 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.868175 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:41:58.868158462 +0000 UTC m=+29.236785830 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.896399 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.935718 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.969251 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.969290 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.969317 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.969346 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969403 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969409 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969438 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969467 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969488 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969489 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969505 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969513 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969444 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:58.969432092 +0000 UTC m=+29.338059460 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969553 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:58.969537545 +0000 UTC m=+29.338164933 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969586 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:58.969574686 +0000 UTC m=+29.338202074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:54 crc kubenswrapper[4736]: E0214 10:41:54.969616 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 10:41:58.969605487 +0000 UTC m=+29.338232865 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:54 crc kubenswrapper[4736]: I0214 10:41:54.974956 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:54Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.022155 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8
baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.059146 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.095833 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.141049 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.174696 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.215915 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.258553 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.295481 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 
10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.336049 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.346407 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:52:07.001749423 +0000 UTC Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.379603 4736 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.396677 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:55 crc kubenswrapper[4736]: E0214 10:41:55.396819 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.396675 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:55 crc kubenswrapper[4736]: E0214 10:41:55.396917 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.396675 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:55 crc kubenswrapper[4736]: E0214 10:41:55.396983 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.423893 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.461460 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.504846 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.540500 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.574463 4736 generic.go:334] "Generic (PLEG): container finished" podID="6cb2b116-efd4-4f64-be6c-5cc5a0655589" 
containerID="804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b" exitCode=0 Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.574566 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerDied","Data":"804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b"} Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.581472 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6"} Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.583869 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z 
is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.616510 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.656723 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.697947 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.736917 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.775166 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.814903 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.857853 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.896644 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.936056 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 
10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:55 crc kubenswrapper[4736]: I0214 10:41:55.975776 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:55Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.008710 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.010862 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.010896 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.010909 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.011032 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.015951 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.068169 4736 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.068422 4736 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.069469 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.069490 4736 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.069500 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.069511 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.069519 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: E0214 10:41:56.085973 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.090465 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.090501 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.090511 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.090528 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.090539 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.101869 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: E0214 10:41:56.105616 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.110071 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.110103 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.110111 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.110122 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.110131 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: E0214 10:41:56.121455 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.124729 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.124778 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.124786 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.124799 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.124810 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: E0214 10:41:56.142422 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.147044 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.149305 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.149337 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.149346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.149362 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.149374 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: E0214 10:41:56.161611 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: E0214 10:41:56.162262 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.163615 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.163710 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.163795 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.163862 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.163916 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.225262 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.239336 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.256756 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.267064 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.267101 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.267112 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.267131 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.267143 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.296657 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.337578 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.346636 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 15:21:30.156633968 +0000 UTC Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.369461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.369503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.369515 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.369532 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.369544 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.387303 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c
043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.472143 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.472176 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.472187 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.472203 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.472215 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.574333 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.574382 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.574393 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.574410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.574420 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.589106 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerDied","Data":"66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.589112 4736 generic.go:334] "Generic (PLEG): container finished" podID="6cb2b116-efd4-4f64-be6c-5cc5a0655589" containerID="66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7" exitCode=0 Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.605155 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.619540 4736 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.637246 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.651534 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.664671 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.675697 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.677245 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.677286 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.677298 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.677314 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.677325 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.688893 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 
10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.708815 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.734026 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.775788 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.779425 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.779460 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.779468 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.779481 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.779490 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.819510 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 
10:41:56.882401 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.882451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.882461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.882477 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.882495 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.883469 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.900477 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.934803 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.984840 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.984877 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.984888 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.984903 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.984914 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:56Z","lastTransitionTime":"2026-02-14T10:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:56 crc kubenswrapper[4736]: I0214 10:41:56.988978 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:56Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.087968 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.088025 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.088039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.088057 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.088069 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:57Z","lastTransitionTime":"2026-02-14T10:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.191393 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.191631 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.191641 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.191656 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.191666 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:57Z","lastTransitionTime":"2026-02-14T10:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.294048 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.294500 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.294689 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.294902 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.295072 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:57Z","lastTransitionTime":"2026-02-14T10:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.347724 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:20:16.274642885 +0000 UTC Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.396259 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.396260 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.396276 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:57 crc kubenswrapper[4736]: E0214 10:41:57.396732 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:41:57 crc kubenswrapper[4736]: E0214 10:41:57.396723 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:41:57 crc kubenswrapper[4736]: E0214 10:41:57.396867 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.398365 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.398410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.398428 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.398450 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.398467 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:57Z","lastTransitionTime":"2026-02-14T10:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.500334 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.500419 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.500436 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.500460 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.500476 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:57Z","lastTransitionTime":"2026-02-14T10:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.594956 4736 generic.go:334] "Generic (PLEG): container finished" podID="6cb2b116-efd4-4f64-be6c-5cc5a0655589" containerID="35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340" exitCode=0 Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.595002 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerDied","Data":"35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340"} Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.603115 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.603146 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.603154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.603167 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.603176 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:57Z","lastTransitionTime":"2026-02-14T10:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.610610 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c
043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.621282 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.634301 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.674620 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.693298 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 
10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.705188 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.705225 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.705239 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.705256 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.705267 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:57Z","lastTransitionTime":"2026-02-14T10:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.710164 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.726265 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.738146 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.756349 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.778912 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.792634 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.803905 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.808082 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.808162 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.808330 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.808358 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.808372 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:57Z","lastTransitionTime":"2026-02-14T10:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.827280 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.846516 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 
10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.863895 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.910503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.910537 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.910547 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.910562 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:57 crc kubenswrapper[4736]: I0214 10:41:57.910574 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:57Z","lastTransitionTime":"2026-02-14T10:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.012952 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.012989 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.013000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.013019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.013031 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.115535 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.115574 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.115585 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.115601 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.115611 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.218436 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.218480 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.218489 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.218503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.218512 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.321512 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.321572 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.321581 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.321597 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.321606 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.348739 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:20:27.38429563 +0000 UTC Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.423727 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.423786 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.423799 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.423816 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.423827 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.525704 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.525760 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.525769 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.525784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.525793 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.600492 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerStarted","Data":"a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.605260 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.605581 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.605605 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.612190 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.623557 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.629312 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.629339 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.629347 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.629361 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.629370 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.632311 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.633002 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.634895 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.646008 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.674371 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 
10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.710118 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.719523 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.730505 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.732428 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.732461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.732470 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.732484 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.732493 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.743054 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqh
xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.763446 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.777883 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.790265 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.808064 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.820585 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.831376 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.834781 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.834810 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.834818 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc 
kubenswrapper[4736]: I0214 10:41:58.834831 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.834840 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.848097 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c3
7e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.863048 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 
10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.875662 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.884960 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.899279 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.910778 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:41:58 crc kubenswrapper[4736]: E0214 10:41:58.910917 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:42:06.910898596 +0000 UTC m=+37.279525964 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.923686 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name
\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e
e9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.937930 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.938461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.938486 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.938494 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.938507 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.938516 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:58Z","lastTransitionTime":"2026-02-14T10:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.954990 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.975073 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:58 crc kubenswrapper[4736]: I0214 10:41:58.988498 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:58Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.002361 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.012027 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.012079 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.012112 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.012145 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012217 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012303 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:07.01228238 +0000 UTC m=+37.380909818 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012303 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012336 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012349 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012366 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012380 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012429 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-14 10:42:07.012412994 +0000 UTC m=+37.381040412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012352 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012477 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:07.012469655 +0000 UTC m=+37.381097113 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012224 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.012502 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:07.012496536 +0000 UTC m=+37.381124004 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.016693 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.040242 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.040280 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.040290 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.040306 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.040317 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.041289 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.054944 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.066990 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.142687 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.142771 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.142789 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.142815 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.142832 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.245123 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.245163 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.245175 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.245190 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.245202 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.348671 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.348716 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.348727 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.348765 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.348778 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.348946 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:56:19.292546415 +0000 UTC Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.396501 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.396537 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.396527 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.396649 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.396754 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:41:59 crc kubenswrapper[4736]: E0214 10:41:59.396864 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.451188 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.451234 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.451246 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.451265 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.451278 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.553650 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.553714 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.553777 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.553800 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.553849 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.613019 4736 generic.go:334] "Generic (PLEG): container finished" podID="6cb2b116-efd4-4f64-be6c-5cc5a0655589" containerID="a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2" exitCode=0 Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.613080 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerDied","Data":"a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.613349 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.634499 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.649557 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-au
th-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.656472 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.656514 4736 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.656526 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.656542 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.656553 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.666796 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.682618 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.706068 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.722680 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.739678 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.754637 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.759801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.759836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.759847 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.759866 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.759877 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.776401 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 
10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.789597 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.801932 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.823555 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.842307 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.856886 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.864459 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.864782 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.864795 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.864810 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.864821 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.872794 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:41:59Z is after 2025-08-24T17:21:41Z" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.967663 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.967718 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.967733 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.967931 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:41:59 crc kubenswrapper[4736]: I0214 10:41:59.967959 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:41:59Z","lastTransitionTime":"2026-02-14T10:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.069905 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.069944 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.069955 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.069969 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.069979 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.172540 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.172589 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.172600 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.172618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.172630 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.275668 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.275772 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.275793 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.275820 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.275837 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.349875 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:16:57.367075622 +0000 UTC Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.379141 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.379184 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.379193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.379207 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.379216 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.417793 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.428258 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.441303 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.452963 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.470153 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.481003 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.481066 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.481089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.481120 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.481144 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.487654 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.502482 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.516220 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.529942 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.542270 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.563994 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.583678 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.583771 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.583784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.583801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.583813 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.587274 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.603434 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.619334 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.619967 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerDied","Data":"9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.619884 4736 generic.go:334] "Generic (PLEG): container finished" podID="6cb2b116-efd4-4f64-be6c-5cc5a0655589" containerID="9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586" exitCode=0 Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.620310 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.642336 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.663039 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.678069 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.686129 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.686220 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.686239 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.686261 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.686318 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.694862 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.707762 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.722322 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.739869 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\
\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.754585 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.772397 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.788682 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.788725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.788736 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.788768 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.788787 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.797394 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.814930 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:5
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.833554 4736 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.850852 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.871481 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.891252 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.891284 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.891292 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.891307 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.891316 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.892110 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.909630 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.920805 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.933994 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.945515 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.957696 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.967771 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.977623 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.987863 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.993134 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.993171 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.993180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.993194 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:00 crc kubenswrapper[4736]: I0214 10:42:00.993204 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:00Z","lastTransitionTime":"2026-02-14T10:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.003084 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.014488 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.024842 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.039107 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.053454 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.075224 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.095652 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.095680 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.095687 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.095701 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.095709 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:01Z","lastTransitionTime":"2026-02-14T10:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.096222 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.135045 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.180733 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.198407 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.198654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.198727 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.198828 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.198895 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:01Z","lastTransitionTime":"2026-02-14T10:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.302088 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.302130 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.302140 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.302157 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.302169 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:01Z","lastTransitionTime":"2026-02-14T10:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.350757 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:34:01.789451875 +0000 UTC Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.396296 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:01 crc kubenswrapper[4736]: E0214 10:42:01.396415 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.396316 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:01 crc kubenswrapper[4736]: E0214 10:42:01.396493 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.396296 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:01 crc kubenswrapper[4736]: E0214 10:42:01.396540 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.404340 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.404368 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.404376 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.404388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.404397 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:01Z","lastTransitionTime":"2026-02-14T10:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.507079 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.507128 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.507139 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.507155 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.507168 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:01Z","lastTransitionTime":"2026-02-14T10:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.609778 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.609815 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.609836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.609853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.609863 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:01Z","lastTransitionTime":"2026-02-14T10:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.625161 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/0.log" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.628182 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3" exitCode=1 Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.628254 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.629075 4736 scope.go:117] "RemoveContainer" containerID="e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.633632 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" event={"ID":"6cb2b116-efd4-4f64-be6c-5cc5a0655589","Type":"ContainerStarted","Data":"d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.652794 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.666167 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.677300 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.687303 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.700168 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.701768 4736 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.712043 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.712082 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.712096 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.712156 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.712168 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:01Z","lastTransitionTime":"2026-02-14T10:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.713100 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.725135 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.737318 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.750769 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.773118 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.799458 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.824283 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.847942 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"message\\\":\\\"10:42:01.364036 5878 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0214 10:42:01.364042 5878 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0214 10:42:01.364059 5878 handler.go:208] Removed 
*v1.Pod event handler 6\\\\nI0214 10:42:01.364066 5878 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 10:42:01.364071 5878 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:01.364077 5878 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:01.364083 5878 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 10:42:01.364090 5878 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:01.364431 5878 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364533 5878 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364844 5878 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 10:42:01.365308 5878 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.858790 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.877006 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.894639 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.909221 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.919485 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.936278 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.936321 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.936331 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.936348 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.936360 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:01Z","lastTransitionTime":"2026-02-14T10:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.955009 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:01 crc kubenswrapper[4736]: I0214 10:42:01.999157 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.038188 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.038234 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.038244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.038259 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.038270 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.045821 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.079582 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.123061 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.146138 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.146170 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.146178 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.146193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.146203 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.161709 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.203817 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.236772 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.248941 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.248967 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.248975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.248989 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.248999 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.276959 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.323654 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10
:42:01Z\\\",\\\"message\\\":\\\"10:42:01.364036 5878 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0214 10:42:01.364042 5878 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0214 10:42:01.364059 5878 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 10:42:01.364066 5878 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 10:42:01.364071 5878 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:01.364077 5878 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:01.364083 5878 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 10:42:01.364090 5878 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:01.364431 5878 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364533 5878 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364844 5878 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 10:42:01.365308 5878 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.350612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.350658 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.350669 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.350685 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.350696 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.351008 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:39:08.677182773 +0000 UTC Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.356798 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.396036 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.452914 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.452947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.452956 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc 
kubenswrapper[4736]: I0214 10:42:02.452968 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.452975 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.555334 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.555380 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.555390 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.555406 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.555418 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.638266 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/1.log" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.639016 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/0.log" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.641445 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4" exitCode=1 Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.641530 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.641698 4736 scope.go:117] "RemoveContainer" containerID="e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.642415 4736 scope.go:117] "RemoveContainer" containerID="dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4" Feb 14 10:42:02 crc kubenswrapper[4736]: E0214 10:42:02.642571 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.657174 4736 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.657210 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.657219 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.657233 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.657243 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.664195 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.676398 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.690693 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.710438 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"message\\\":\\\"10:42:01.364036 5878 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0214 10:42:01.364042 5878 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0214 10:42:01.364059 5878 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 10:42:01.364066 5878 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 10:42:01.364071 5878 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:01.364077 5878 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:01.364083 5878 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 10:42:01.364090 5878 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:01.364431 5878 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364533 5878 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364844 5878 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 10:42:01.365308 5878 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-sy
stemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{
\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.723207 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c
059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.732538 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.743803 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.755109 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.758820 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.758859 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.758871 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.758886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.758897 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.766012 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.793330 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.837677 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.861067 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.861123 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.861136 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.861169 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.861196 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.875639 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.914261 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.955948 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.963261 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.963351 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.963370 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.963413 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.963425 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:02Z","lastTransitionTime":"2026-02-14T10:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:02 crc kubenswrapper[4736]: I0214 10:42:02.997854 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.066291 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.066370 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.066388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.066437 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.066452 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.169047 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.169342 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.169424 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.169515 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.169607 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.271608 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.271704 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.271712 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.271724 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.271732 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.351232 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:28:30.678713958 +0000 UTC Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.374624 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.375354 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.375393 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.375421 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.375446 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.396967 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.396967 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:03 crc kubenswrapper[4736]: E0214 10:42:03.397406 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:03 crc kubenswrapper[4736]: E0214 10:42:03.397405 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.396992 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:03 crc kubenswrapper[4736]: E0214 10:42:03.397971 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.477678 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.477716 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.477725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.477764 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.477773 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.580591 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.580624 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.580633 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.580649 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.580658 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.649184 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/1.log" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.683024 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.683062 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.683073 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.683089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.683101 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.727256 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc"] Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.727728 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.729523 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.730012 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.741569 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0d
d97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.755594 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413
bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333
d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.764559 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04011cfa-0fe1-47af-b7bc-a9895caff97f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.764849 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04011cfa-0fe1-47af-b7bc-a9895caff97f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.764941 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04011cfa-0fe1-47af-b7bc-a9895caff97f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.765023 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftszz\" (UniqueName: \"kubernetes.io/projected/04011cfa-0fe1-47af-b7bc-a9895caff97f-kube-api-access-ftszz\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.769237 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.780252 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.784770 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.784800 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.784811 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc 
kubenswrapper[4736]: I0214 10:42:03.784827 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.784840 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.789734 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.805695 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"message\\\":\\\"10:42:01.364036 5878 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0214 10:42:01.364042 5878 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0214 10:42:01.364059 5878 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 10:42:01.364066 5878 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 
10:42:01.364071 5878 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:01.364077 5878 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:01.364083 5878 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 10:42:01.364090 5878 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:01.364431 5878 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364533 5878 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364844 5878 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 10:42:01.365308 5878 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 
10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-
k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.819127 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.840453 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0
a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.853525 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.864905 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.865968 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04011cfa-0fe1-47af-b7bc-a9895caff97f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.866075 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04011cfa-0fe1-47af-b7bc-a9895caff97f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.866160 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/04011cfa-0fe1-47af-b7bc-a9895caff97f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.866227 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftszz\" (UniqueName: \"kubernetes.io/projected/04011cfa-0fe1-47af-b7bc-a9895caff97f-kube-api-access-ftszz\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.866766 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04011cfa-0fe1-47af-b7bc-a9895caff97f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.867187 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04011cfa-0fe1-47af-b7bc-a9895caff97f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.872216 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04011cfa-0fe1-47af-b7bc-a9895caff97f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc 
kubenswrapper[4736]: I0214 10:42:03.876835 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.885046 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftszz\" (UniqueName: \"kubernetes.io/projected/04011cfa-0fe1-47af-b7bc-a9895caff97f-kube-api-access-ftszz\") pod \"ovnkube-control-plane-749d76644c-q4qqc\" (UID: \"04011cfa-0fe1-47af-b7bc-a9895caff97f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.886570 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.886610 4736 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.886622 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.886638 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.886649 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.887621 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.899429 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.907593 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.918165 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.933031 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.989371 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.989653 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.989734 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.989855 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:03 crc kubenswrapper[4736]: I0214 10:42:03.989934 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:03Z","lastTransitionTime":"2026-02-14T10:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.040511 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" Feb 14 10:42:04 crc kubenswrapper[4736]: W0214 10:42:04.051649 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04011cfa_0fe1_47af_b7bc_a9895caff97f.slice/crio-177c3053b24777af189ef9483d791d8f76d01b09c76458d0ae429d49d9eaa62a WatchSource:0}: Error finding container 177c3053b24777af189ef9483d791d8f76d01b09c76458d0ae429d49d9eaa62a: Status 404 returned error can't find the container with id 177c3053b24777af189ef9483d791d8f76d01b09c76458d0ae429d49d9eaa62a Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.093145 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.093183 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.093191 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.093207 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.093216 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:04Z","lastTransitionTime":"2026-02-14T10:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.197350 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.197397 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.197408 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.197425 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.197437 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:04Z","lastTransitionTime":"2026-02-14T10:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.299470 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.299763 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.299872 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.299962 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.300027 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:04Z","lastTransitionTime":"2026-02-14T10:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.352155 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:22:39.669559438 +0000 UTC Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.402241 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.402288 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.402299 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.402316 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.402328 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:04Z","lastTransitionTime":"2026-02-14T10:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.505354 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.505410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.505422 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.505441 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.505453 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:04Z","lastTransitionTime":"2026-02-14T10:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.607657 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.607690 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.607709 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.607725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.607760 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:04Z","lastTransitionTime":"2026-02-14T10:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.657272 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" event={"ID":"04011cfa-0fe1-47af-b7bc-a9895caff97f","Type":"ContainerStarted","Data":"177c3053b24777af189ef9483d791d8f76d01b09c76458d0ae429d49d9eaa62a"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.710444 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.710516 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.710540 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.710569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.710627 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:04Z","lastTransitionTime":"2026-02-14T10:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.814906 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.814972 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.814990 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.815022 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.815042 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:04Z","lastTransitionTime":"2026-02-14T10:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.917672 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.917712 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.917725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.917765 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:04 crc kubenswrapper[4736]: I0214 10:42:04.917778 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:04Z","lastTransitionTime":"2026-02-14T10:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.020396 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.020442 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.020458 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.020480 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.020496 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.122885 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.122921 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.122932 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.122947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.122957 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.195308 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-przcz"] Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.195777 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:05 crc kubenswrapper[4736]: E0214 10:42:05.195855 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.210714 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.220830 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.225352 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.225463 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.225550 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.225636 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.225862 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.230984 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.243136 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f6
0f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.255655 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e4
6510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327f
dd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.264684 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc 
kubenswrapper[4736]: I0214 10:42:05.281666 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.281884 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.282178 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkdjt\" (UniqueName: \"kubernetes.io/projected/df467c01-3f4e-41c8-b5fa-b14831cfe827-kube-api-access-kkdjt\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.294986 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.305365 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.320090 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"message\\\":\\\"10:42:01.364036 5878 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0214 10:42:01.364042 5878 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0214 10:42:01.364059 5878 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 10:42:01.364066 5878 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 10:42:01.364071 5878 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:01.364077 5878 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:01.364083 5878 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 10:42:01.364090 5878 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:01.364431 5878 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364533 5878 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364844 5878 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 10:42:01.365308 5878 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-sy
stemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{
\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.328252 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.328293 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.328306 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.328323 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.328335 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.329358 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.339894 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.348886 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.353469 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:06:28.100188015 +0000 UTC Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.358882 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.368182 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.378576 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.387279 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.387328 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkdjt\" (UniqueName: \"kubernetes.io/projected/df467c01-3f4e-41c8-b5fa-b14831cfe827-kube-api-access-kkdjt\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:05 crc kubenswrapper[4736]: E0214 10:42:05.387611 4736 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:05 crc kubenswrapper[4736]: E0214 10:42:05.387727 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs podName:df467c01-3f4e-41c8-b5fa-b14831cfe827 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:05.887711373 +0000 UTC m=+36.256338731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs") pod "network-metrics-daemon-przcz" (UID: "df467c01-3f4e-41c8-b5fa-b14831cfe827") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.391282 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac
71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.396420 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.396421 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.396629 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:05 crc kubenswrapper[4736]: E0214 10:42:05.396626 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:05 crc kubenswrapper[4736]: E0214 10:42:05.396768 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:05 crc kubenswrapper[4736]: E0214 10:42:05.396834 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.404730 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkdjt\" (UniqueName: \"kubernetes.io/projected/df467c01-3f4e-41c8-b5fa-b14831cfe827-kube-api-access-kkdjt\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.430934 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.431152 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.431244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.431310 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.431380 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.533796 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.533829 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.533838 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.533853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.533863 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.636693 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.636734 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.636765 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.636784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.636797 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.661609 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" event={"ID":"04011cfa-0fe1-47af-b7bc-a9895caff97f","Type":"ContainerStarted","Data":"9db35e8d4f12bd46c329d83f9df4a57050ec639f8f0a809eef25ca39b9e2db56"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.661655 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" event={"ID":"04011cfa-0fe1-47af-b7bc-a9895caff97f","Type":"ContainerStarted","Data":"97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.684304 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b900922
72e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581
f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.700174 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.713349 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.736400 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"message\\\":\\\"10:42:01.364036 5878 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0214 10:42:01.364042 5878 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0214 10:42:01.364059 5878 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 10:42:01.364066 5878 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 10:42:01.364071 5878 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:01.364077 5878 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:01.364083 5878 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 10:42:01.364090 5878 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:01.364431 5878 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364533 5878 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364844 5878 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 10:42:01.365308 5878 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-sy
stemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{
\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.739250 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.739285 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.739296 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.739311 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.739323 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.750630 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec639f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.765474 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.777248 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.794141 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.806198 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.818316 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.827917 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.841788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.841985 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.842063 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.842146 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.842204 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.844474 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.857849 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.868676 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.881229 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.891213 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:05 crc kubenswrapper[4736]: E0214 10:42:05.891366 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:05 crc kubenswrapper[4736]: E0214 10:42:05.891417 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs podName:df467c01-3f4e-41c8-b5fa-b14831cfe827 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:06.891403143 +0000 UTC m=+37.260030511 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs") pod "network-metrics-daemon-przcz" (UID: "df467c01-3f4e-41c8-b5fa-b14831cfe827") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.894410 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.903000 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:05Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.945163 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.945214 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.945229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.945251 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:05 crc kubenswrapper[4736]: I0214 10:42:05.945270 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:05Z","lastTransitionTime":"2026-02-14T10:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.048078 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.048118 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.048132 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.048150 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.048161 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.150412 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.150454 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.150464 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.150481 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.150491 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.252704 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.252774 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.252789 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.252806 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.252817 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.353886 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 09:45:11.809915621 +0000 UTC Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.355958 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.356043 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.356072 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.356090 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.356102 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.397098 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:06 crc kubenswrapper[4736]: E0214 10:42:06.397377 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.441238 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.441292 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.441308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.441329 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.441344 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: E0214 10:42:06.456652 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:06Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.460336 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.460381 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.460392 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.460409 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.460421 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: E0214 10:42:06.473406 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:06Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.480237 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.480275 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.480286 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.480301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.480312 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: E0214 10:42:06.490878 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:06Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.495599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.495628 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.495638 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.495654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.495664 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: E0214 10:42:06.506646 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:06Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.509522 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.509567 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.509575 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.509588 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.509596 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: E0214 10:42:06.519543 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:06Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:06 crc kubenswrapper[4736]: E0214 10:42:06.519658 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.521098 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.521124 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.521135 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.521150 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.521163 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.624292 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.624379 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.624400 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.624428 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.624446 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.727016 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.727080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.727152 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.727214 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.727240 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.831084 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.831458 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.831478 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.831500 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.831517 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.903610 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:06 crc kubenswrapper[4736]: E0214 10:42:06.903803 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:06 crc kubenswrapper[4736]: E0214 10:42:06.903888 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs podName:df467c01-3f4e-41c8-b5fa-b14831cfe827 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:08.903868184 +0000 UTC m=+39.272495572 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs") pod "network-metrics-daemon-przcz" (UID: "df467c01-3f4e-41c8-b5fa-b14831cfe827") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.933882 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.933956 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.933995 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.934034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:06 crc kubenswrapper[4736]: I0214 10:42:06.934058 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:06Z","lastTransitionTime":"2026-02-14T10:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.004638 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.004839 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:42:23.004813296 +0000 UTC m=+53.373440664 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.036822 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.036879 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.036897 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.036977 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: 
I0214 10:42:07.037000 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.105896 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.105957 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.106016 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.106084 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106121 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106146 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106159 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106158 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106203 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106212 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:23.10619602 +0000 UTC m=+53.474823388 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106267 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106293 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:23.106268572 +0000 UTC m=+53.474895980 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106306 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106323 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-14 10:42:23.106307363 +0000 UTC m=+53.474934771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106332 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.106409 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:23.106391035 +0000 UTC m=+53.475018433 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.139245 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.139304 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.139320 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.139346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.139364 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.241727 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.241782 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.241791 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.241805 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.241813 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.343955 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.344019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.344058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.344092 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.344130 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.355087 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 03:15:14.440336153 +0000 UTC Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.396601 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.396601 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.396798 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.396803 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.396602 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:07 crc kubenswrapper[4736]: E0214 10:42:07.396932 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.447158 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.447210 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.447225 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.447248 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.447266 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.568106 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.568157 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.568169 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.568189 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.568202 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.673349 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.673395 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.673413 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.673434 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.673446 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.776008 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.776046 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.776058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.776075 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.776090 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.879009 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.879066 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.879083 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.879108 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.879124 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.981200 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.981243 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.981255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.981273 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:07 crc kubenswrapper[4736]: I0214 10:42:07.981284 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:07Z","lastTransitionTime":"2026-02-14T10:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.084782 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.084858 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.084876 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.084901 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.084918 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:08Z","lastTransitionTime":"2026-02-14T10:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.187460 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.187496 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.187509 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.187524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.187535 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:08Z","lastTransitionTime":"2026-02-14T10:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.289959 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.290299 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.290434 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.290546 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.290654 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:08Z","lastTransitionTime":"2026-02-14T10:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.355466 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:51:01.695698026 +0000 UTC Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.393951 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.394270 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.394486 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.394803 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.395054 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:08Z","lastTransitionTime":"2026-02-14T10:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.396776 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:08 crc kubenswrapper[4736]: E0214 10:42:08.396878 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.498078 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.498378 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.498466 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.498572 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.498655 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:08Z","lastTransitionTime":"2026-02-14T10:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.602017 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.602072 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.602089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.602112 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.602128 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:08Z","lastTransitionTime":"2026-02-14T10:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.704573 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.704622 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.704633 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.704651 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.704661 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:08Z","lastTransitionTime":"2026-02-14T10:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.807046 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.807155 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.807173 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.807213 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.807233 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:08Z","lastTransitionTime":"2026-02-14T10:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.909815 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.909904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.909924 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.909984 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.910004 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:08Z","lastTransitionTime":"2026-02-14T10:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:08 crc kubenswrapper[4736]: I0214 10:42:08.925279 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:08 crc kubenswrapper[4736]: E0214 10:42:08.925465 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:08 crc kubenswrapper[4736]: E0214 10:42:08.925577 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs podName:df467c01-3f4e-41c8-b5fa-b14831cfe827 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:12.925552677 +0000 UTC m=+43.294180095 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs") pod "network-metrics-daemon-przcz" (UID: "df467c01-3f4e-41c8-b5fa-b14831cfe827") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.013615 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.013706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.013889 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.013995 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.014105 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.116881 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.116930 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.116942 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.116961 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.116974 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.220175 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.220212 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.220223 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.220239 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.220251 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.322581 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.322634 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.322653 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.322679 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.322697 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.356152 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:15:34.392761423 +0000 UTC Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.397127 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:09 crc kubenswrapper[4736]: E0214 10:42:09.397517 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.397304 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.397174 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:09 crc kubenswrapper[4736]: E0214 10:42:09.397883 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:09 crc kubenswrapper[4736]: E0214 10:42:09.398004 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.425375 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.425428 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.425440 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.425474 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.425487 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.528221 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.528274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.528290 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.528313 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.528330 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.631480 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.631559 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.631579 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.631614 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.631642 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.734686 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.734774 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.734791 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.734815 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.734836 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.838480 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.838529 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.838544 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.838569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.838585 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.940831 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.940863 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.940872 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.940885 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:09 crc kubenswrapper[4736]: I0214 10:42:09.940895 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:09Z","lastTransitionTime":"2026-02-14T10:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.046359 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.046418 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.046436 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.046461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.046483 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.150302 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.150356 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.150373 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.150392 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.150404 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.253554 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.253613 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.253624 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.253645 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.253659 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.356410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.356478 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.356500 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.356528 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.356558 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.357468 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:39:38.147165588 +0000 UTC Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.397260 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:10 crc kubenswrapper[4736]: E0214 10:42:10.397437 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.423218 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.436067 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.450049 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.458946 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.458993 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.459003 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.459048 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.459061 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.463136 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.481536 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.495285 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.505670 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc 
kubenswrapper[4736]: I0214 10:42:10.522156 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.535250 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.548613 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.562160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.562412 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.562557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.562693 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.562851 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.583162 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33899113ff7da6ab051091bc28607c3a669ef63f324c50fb0e5272160e614f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"message\\\":\\\"10:42:01.364036 5878 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0214 10:42:01.364042 5878 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0214 10:42:01.364059 5878 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 10:42:01.364066 5878 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 
10:42:01.364071 5878 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:01.364077 5878 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:01.364083 5878 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 10:42:01.364090 5878 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:01.364431 5878 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364533 5878 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:01.364844 5878 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 10:42:01.365308 5878 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 
10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-
k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.599267 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f
9df4a57050ec639f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.630503 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.647271 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.665370 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.665438 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.665449 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.665489 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.665502 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.666226 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.685647 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.697699 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.768793 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.768856 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.768866 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:10 crc 
kubenswrapper[4736]: I0214 10:42:10.768881 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.768890 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.871498 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.871814 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.871910 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.872008 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.872089 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.975472 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.975823 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.975913 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.976020 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:10 crc kubenswrapper[4736]: I0214 10:42:10.976109 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:10Z","lastTransitionTime":"2026-02-14T10:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.079126 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.079168 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.079180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.079196 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.079207 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:11Z","lastTransitionTime":"2026-02-14T10:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.182048 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.182129 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.182160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.182193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.182216 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:11Z","lastTransitionTime":"2026-02-14T10:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.285348 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.285390 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.285402 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.285418 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.285428 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:11Z","lastTransitionTime":"2026-02-14T10:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.357940 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:27:26.470022939 +0000 UTC
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.387910 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.387975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.387993 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.388018 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.388037 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:11Z","lastTransitionTime":"2026-02-14T10:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.397212 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.397212 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.397232 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 10:42:11 crc kubenswrapper[4736]: E0214 10:42:11.397367 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 10:42:11 crc kubenswrapper[4736]: E0214 10:42:11.397511 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 10:42:11 crc kubenswrapper[4736]: E0214 10:42:11.397685 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.490708 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.490775 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.490786 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.490805 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.490817 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:11Z","lastTransitionTime":"2026-02-14T10:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.593635 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.593682 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.593697 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.593718 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.593734 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:11Z","lastTransitionTime":"2026-02-14T10:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.696454 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.696491 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.696501 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.696514 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.696523 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:11Z","lastTransitionTime":"2026-02-14T10:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.799174 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.799244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.799267 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.799297 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.799319 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:11Z","lastTransitionTime":"2026-02-14T10:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.901741 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.901856 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.901880 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.901908 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:11 crc kubenswrapper[4736]: I0214 10:42:11.901930 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:11Z","lastTransitionTime":"2026-02-14T10:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.004218 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.004255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.004265 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.004280 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.004290 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.107678 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.107736 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.107788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.107817 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.107839 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.210172 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.210222 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.210239 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.210260 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.210277 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.313077 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.313129 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.313138 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.313153 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.313162 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.358652 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:51:59.184576793 +0000 UTC
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.396158 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz"
Feb 14 10:42:12 crc kubenswrapper[4736]: E0214 10:42:12.396297 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.416171 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.416210 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.416220 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.416231 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.416240 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.519681 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.519736 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.519785 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.519810 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.519828 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.622609 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.622666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.622682 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.622706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.622724 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.725315 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.725363 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.725383 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.725407 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.725424 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.828424 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.828479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.828495 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.828520 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.828544 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.932413 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.932521 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.932541 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.932564 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.932580 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:12Z","lastTransitionTime":"2026-02-14T10:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.974725 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz"
Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.974925 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr"
Feb 14 10:42:12 crc kubenswrapper[4736]: E0214 10:42:12.974957 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 14 10:42:12 crc kubenswrapper[4736]: E0214 10:42:12.975189 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs podName:df467c01-3f4e-41c8-b5fa-b14831cfe827 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:20.975158003 +0000 UTC m=+51.343785401 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs") pod "network-metrics-daemon-przcz" (UID: "df467c01-3f4e-41c8-b5fa-b14831cfe827") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.976287 4736 scope.go:117] "RemoveContainer" containerID="dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4" Feb 14 10:42:12 crc kubenswrapper[4736]: I0214 10:42:12.993878 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:12Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc 
kubenswrapper[4736]: I0214 10:42:13.018385 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.032240 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.036179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.036212 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.036240 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.036255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.036264 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.042919 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.058847 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.073361 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa
1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.091597 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.109837 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.123897 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.138578 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.138613 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.138623 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 
10:42:13.138640 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.138651 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.155043 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177
751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.171449 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.190169 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.205385 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.223836 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.238518 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.241597 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.241639 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.241656 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.241678 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.241694 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.255631 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.269244 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.344401 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.344446 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.344460 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.344479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.344492 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.359789 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:55:24.296509734 +0000 UTC Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.396533 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.396533 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:13 crc kubenswrapper[4736]: E0214 10:42:13.396730 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:13 crc kubenswrapper[4736]: E0214 10:42:13.396641 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.396542 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:13 crc kubenswrapper[4736]: E0214 10:42:13.396813 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.446528 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.446564 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.446577 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.446593 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.446605 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.548795 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.548928 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.548939 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.548956 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.548966 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.651341 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.651394 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.651413 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.651450 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.651468 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.695674 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/1.log" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.697922 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.698788 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.740918 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44
c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.753573 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.753626 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.753635 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.753652 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.753662 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.763783 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.775269 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.792554 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\"
:\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.802185 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec639f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc 
kubenswrapper[4736]: I0214 10:42:13.814110 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.826314 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.838353 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.849354 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.855975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.856006 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.856015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.856040 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.856049 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.860526 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.870177 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.883250 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.895430 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.904949 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.916865 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.930533 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa
1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.940395 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:13Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.958818 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.958869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.958878 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.958892 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:13 crc kubenswrapper[4736]: I0214 10:42:13.958902 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:13Z","lastTransitionTime":"2026-02-14T10:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.061071 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.061129 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.061140 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.061158 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.061169 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.164151 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.164183 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.164193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.164206 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.164214 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.266830 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.266869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.266879 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.266894 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.266905 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.360300 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 01:36:23.908983605 +0000 UTC Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.369807 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.369871 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.369888 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.369912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.369929 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.396477 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:14 crc kubenswrapper[4736]: E0214 10:42:14.396673 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.472798 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.472857 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.472885 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.472909 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.472927 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.576068 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.576146 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.576171 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.576199 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.576217 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.679329 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.679380 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.679396 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.679418 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.679438 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.782328 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.782395 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.782404 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.782419 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.782427 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.885386 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.885543 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.885569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.885598 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.885622 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.988139 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.988201 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.988213 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.988229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:14 crc kubenswrapper[4736]: I0214 10:42:14.988241 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:14Z","lastTransitionTime":"2026-02-14T10:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.090909 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.090960 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.090976 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.090998 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.091014 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:15Z","lastTransitionTime":"2026-02-14T10:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.193685 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.193783 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.193825 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.193858 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.193881 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:15Z","lastTransitionTime":"2026-02-14T10:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.297112 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.297242 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.297263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.297291 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.297310 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:15Z","lastTransitionTime":"2026-02-14T10:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.361387 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:57:04.717007977 +0000 UTC Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.396917 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.396972 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.396917 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:15 crc kubenswrapper[4736]: E0214 10:42:15.397140 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:15 crc kubenswrapper[4736]: E0214 10:42:15.397319 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:15 crc kubenswrapper[4736]: E0214 10:42:15.397404 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.400781 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.400803 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.400812 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.400825 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.400833 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:15Z","lastTransitionTime":"2026-02-14T10:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.505015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.505089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.505112 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.505140 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.505161 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:15Z","lastTransitionTime":"2026-02-14T10:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.608300 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.608380 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.608404 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.608434 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.608458 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:15Z","lastTransitionTime":"2026-02-14T10:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.711318 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.711370 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.711388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.711409 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.711427 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:15Z","lastTransitionTime":"2026-02-14T10:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.813734 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.813792 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.813802 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.813824 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.813835 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:15Z","lastTransitionTime":"2026-02-14T10:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.916023 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.916056 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.916063 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.916076 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:15 crc kubenswrapper[4736]: I0214 10:42:15.916084 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:15Z","lastTransitionTime":"2026-02-14T10:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.019137 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.019200 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.019219 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.019244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.019264 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.122270 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.122362 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.122380 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.122407 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.122425 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.224863 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.224918 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.224936 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.224960 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.224977 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.327987 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.328042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.328058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.328080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.328096 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.362493 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:47:55.965316977 +0000 UTC Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.397292 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:16 crc kubenswrapper[4736]: E0214 10:42:16.397584 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.431298 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.431353 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.431371 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.431396 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.431412 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.534664 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.534730 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.534788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.534821 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.534845 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.543359 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.543401 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.543418 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.543447 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.543483 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: E0214 10:42:16.565650 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:16Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.570880 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.570961 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.570979 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.570998 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.571040 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: E0214 10:42:16.590229 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:16Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.595654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.595726 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.595784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.595830 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.595855 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: E0214 10:42:16.612682 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...status patch payload identical to the 10:42:16.590229 attempt above, elided...}\" for node
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:16Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.616787 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.616825 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.616833 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.616849 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.616858 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: E0214 10:42:16.629696 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...status patch payload identical to the 10:42:16.590229 attempt above, elided...
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:16Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.633643 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.633679 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.633687 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.633700 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.633711 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: E0214 10:42:16.647411 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:16Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:16 crc kubenswrapper[4736]: E0214 10:42:16.647557 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.648919 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.648966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.648985 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.649003 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.649014 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.751261 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.751330 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.751353 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.751385 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.751407 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.854999 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.855473 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.855628 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.855816 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.855952 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.958542 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.958595 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.958611 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.958631 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:16 crc kubenswrapper[4736]: I0214 10:42:16.958646 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:16Z","lastTransitionTime":"2026-02-14T10:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.061628 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.061682 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.061694 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.061714 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.061727 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.164849 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.164920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.164938 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.164963 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.164980 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.268000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.268048 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.268065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.268091 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.268108 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.362698 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:52:53.154790923 +0000 UTC Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.371436 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.371498 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.371511 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.371527 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.371539 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.396144 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.396171 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.396150 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:17 crc kubenswrapper[4736]: E0214 10:42:17.396280 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:17 crc kubenswrapper[4736]: E0214 10:42:17.396429 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:17 crc kubenswrapper[4736]: E0214 10:42:17.396637 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.474301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.474346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.474360 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.474377 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.474389 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.577181 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.577362 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.577375 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.577397 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.577411 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.679473 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.679525 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.679538 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.679557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.679570 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.782770 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.782851 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.782869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.782896 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.782917 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.886108 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.886205 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.886221 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.886244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.886261 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.989346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.989413 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.989431 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.989457 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:17 crc kubenswrapper[4736]: I0214 10:42:17.989476 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:17Z","lastTransitionTime":"2026-02-14T10:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.092727 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.092836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.092856 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.092880 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.092896 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:18Z","lastTransitionTime":"2026-02-14T10:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.195324 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.195364 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.195404 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.195423 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.195455 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:18Z","lastTransitionTime":"2026-02-14T10:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.298119 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.298192 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.298209 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.298238 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.298255 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:18Z","lastTransitionTime":"2026-02-14T10:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.363726 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:58:01.196727102 +0000 UTC Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.396527 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:18 crc kubenswrapper[4736]: E0214 10:42:18.396698 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.401425 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.401515 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.401535 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.401593 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.401613 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:18Z","lastTransitionTime":"2026-02-14T10:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.516566 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.516618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.516634 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.516665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.516682 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:18Z","lastTransitionTime":"2026-02-14T10:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.620615 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.620675 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.620695 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.620719 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.620738 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:18Z","lastTransitionTime":"2026-02-14T10:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.724221 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.724312 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.724337 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.724369 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.724393 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:18Z","lastTransitionTime":"2026-02-14T10:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.828025 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.828103 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.828128 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.828159 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.828182 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:18Z","lastTransitionTime":"2026-02-14T10:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.931069 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.931113 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.931126 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.931144 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:18 crc kubenswrapper[4736]: I0214 10:42:18.931156 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:18Z","lastTransitionTime":"2026-02-14T10:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.034064 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.034116 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.034133 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.034156 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.034174 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.137260 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.137631 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.137855 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.138057 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.138242 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.203015 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.211307 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.225949 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.237581 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.241220 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.241270 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.241287 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.241311 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.241328 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.250245 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.258977 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.280576 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.291537 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.305432 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.320819 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.338622 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa
1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.343289 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.343342 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.343360 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.343384 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.343402 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.351815 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc 
kubenswrapper[4736]: I0214 10:42:19.364794 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:53:00.672051307 +0000 UTC Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.375935 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5
dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.399633 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.399694 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.399669 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.399674 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:19 crc kubenswrapper[4736]: E0214 10:42:19.399858 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:19 crc kubenswrapper[4736]: E0214 10:42:19.400017 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:19 crc kubenswrapper[4736]: E0214 10:42:19.400138 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:19 crc kubenswrapper[4736]: E0214 10:42:19.400242 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.401580 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.414703 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.434429 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/ope
nvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.446711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.446775 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.446788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.446806 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc 
kubenswrapper[4736]: I0214 10:42:19.446841 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.451375 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec639f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.467321 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.480661 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:19Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.548509 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.548585 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.548597 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc 
kubenswrapper[4736]: I0214 10:42:19.548615 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.548627 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.651013 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.651063 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.651074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.651091 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.651102 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.754156 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.754217 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.754236 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.754263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.754282 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.857057 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.857115 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.857133 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.857159 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.857177 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.960195 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.960254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.960271 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.960296 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:19 crc kubenswrapper[4736]: I0214 10:42:19.960315 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:19Z","lastTransitionTime":"2026-02-14T10:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.062827 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.062900 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.062946 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.062975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.062997 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.165686 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.165730 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.165779 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.165809 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.165841 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.267532 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.267652 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.267662 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.267677 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.267689 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.365793 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:13:21.344262055 +0000 UTC Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.371121 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.371190 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.371207 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.371230 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.371248 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.415580 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.426838 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.445031 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.458030 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.470390 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.473618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.473667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.473685 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.473712 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.473735 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.490777 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.505656 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.521546 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.536552 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.552779 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc 
kubenswrapper[4736]: I0214 10:42:20.564468 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.575931 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.575979 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.575991 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.576020 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.576033 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.577063 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.585808 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.611863 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\"
:\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.624908 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec639f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc 
kubenswrapper[4736]: I0214 10:42:20.643503 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.657198 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.671365 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.678104 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.678150 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.678160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.678175 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.678188 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.780365 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.780432 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.780457 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.780486 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.780512 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.882437 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.882481 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.882491 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.882509 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.882525 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.985706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.985832 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.985855 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.985879 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:20 crc kubenswrapper[4736]: I0214 10:42:20.985896 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:20Z","lastTransitionTime":"2026-02-14T10:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.070537 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:21 crc kubenswrapper[4736]: E0214 10:42:21.070727 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:21 crc kubenswrapper[4736]: E0214 10:42:21.070874 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs podName:df467c01-3f4e-41c8-b5fa-b14831cfe827 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:37.07084577 +0000 UTC m=+67.439473178 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs") pod "network-metrics-daemon-przcz" (UID: "df467c01-3f4e-41c8-b5fa-b14831cfe827") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.088570 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.088605 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.088625 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.088643 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.088653 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:21Z","lastTransitionTime":"2026-02-14T10:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.192128 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.192215 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.192232 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.192255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.192272 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:21Z","lastTransitionTime":"2026-02-14T10:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.295405 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.295471 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.295488 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.295512 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.295530 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:21Z","lastTransitionTime":"2026-02-14T10:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.365893 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:11:37.641661005 +0000 UTC Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.396528 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.396534 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.397017 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:21 crc kubenswrapper[4736]: E0214 10:42:21.397291 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.397317 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:21 crc kubenswrapper[4736]: E0214 10:42:21.397453 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:21 crc kubenswrapper[4736]: E0214 10:42:21.397699 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:21 crc kubenswrapper[4736]: E0214 10:42:21.397821 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.398196 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.398234 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.398248 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.398269 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.398281 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:21Z","lastTransitionTime":"2026-02-14T10:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.501456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.501543 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.501568 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.501598 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.501619 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:21Z","lastTransitionTime":"2026-02-14T10:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.604930 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.605006 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.605028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.605085 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.605108 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:21Z","lastTransitionTime":"2026-02-14T10:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.707317 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.707373 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.707385 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.707408 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.707421 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:21Z","lastTransitionTime":"2026-02-14T10:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.809225 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.809479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.809547 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.809623 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.809693 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:21Z","lastTransitionTime":"2026-02-14T10:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.912308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.912566 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.912651 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.912733 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:21 crc kubenswrapper[4736]: I0214 10:42:21.912847 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:21Z","lastTransitionTime":"2026-02-14T10:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.015368 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.015715 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.015951 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.016180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.016355 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.118826 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.119176 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.119405 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.119501 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.119599 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.222058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.222132 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.222159 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.222186 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.222207 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.324904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.324943 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.324954 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.324969 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.324980 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.366329 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:50:28.271902334 +0000 UTC Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.427263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.427300 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.427311 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.427327 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.427338 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.528706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.528771 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.528781 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.528794 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.528803 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.631527 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.631566 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.631575 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.631589 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.631601 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.734577 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.734609 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.734616 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.734629 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.734638 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.836976 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.837013 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.837021 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.837034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.837043 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.939437 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.939737 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.939901 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.940080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:22 crc kubenswrapper[4736]: I0214 10:42:22.940263 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:22Z","lastTransitionTime":"2026-02-14T10:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.043280 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.043311 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.043320 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.043332 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.043341 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.092459 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.092734 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 10:42:55.092682517 +0000 UTC m=+85.461309895 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.147633 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.147694 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.147715 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.147776 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.147799 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.193666 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.193781 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.193846 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.193909 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.193963 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194012 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194038 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194050 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194112 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194125 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:55.194099752 +0000 UTC m=+85.562727150 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194120 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194238 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:55.194209135 +0000 UTC m=+85.562836543 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194253 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194281 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194291 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:55.194269067 +0000 UTC m=+85.562896475 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.194407 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 10:42:55.19438784 +0000 UTC m=+85.563015238 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.251124 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.251204 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.251225 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.251255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.251279 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.354784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.355303 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.355457 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.355628 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.355795 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.367213 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:15:38.80178934 +0000 UTC Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.396537 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.396573 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.396595 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.396992 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.397138 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.397206 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.397312 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:23 crc kubenswrapper[4736]: E0214 10:42:23.397430 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.458907 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.459578 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.459714 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.459930 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.460095 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.563413 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.563496 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.563519 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.563550 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.563573 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.666998 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.667062 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.667089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.667119 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.667143 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.770273 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.770325 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.770342 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.770410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.770427 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.872569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.872627 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.872644 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.872667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.872684 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.975884 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.975971 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.975997 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.976028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:23 crc kubenswrapper[4736]: I0214 10:42:23.976058 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:23Z","lastTransitionTime":"2026-02-14T10:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.078836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.078916 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.078939 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.078965 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.078983 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:24Z","lastTransitionTime":"2026-02-14T10:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.181439 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.181489 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.181508 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.181532 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.181550 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:24Z","lastTransitionTime":"2026-02-14T10:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.284040 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.284092 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.284108 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.284133 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.284149 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:24Z","lastTransitionTime":"2026-02-14T10:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.367971 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:56:05.894526108 +0000 UTC Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.388138 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.388190 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.388208 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.388234 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.388251 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:24Z","lastTransitionTime":"2026-02-14T10:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.491385 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.491435 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.491446 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.491465 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.491478 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:24Z","lastTransitionTime":"2026-02-14T10:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.595184 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.595257 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.595274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.595299 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.595318 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:24Z","lastTransitionTime":"2026-02-14T10:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.697931 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.697993 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.698010 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.698039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.698057 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:24Z","lastTransitionTime":"2026-02-14T10:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.801966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.802029 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.802045 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.802071 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.802087 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:24Z","lastTransitionTime":"2026-02-14T10:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.905407 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.905468 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.905486 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.905512 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:24 crc kubenswrapper[4736]: I0214 10:42:24.905530 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:24Z","lastTransitionTime":"2026-02-14T10:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.008431 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.008508 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.008534 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.008567 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.008587 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.111943 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.112019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.112042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.112076 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.112099 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.216035 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.216105 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.216130 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.216158 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.216183 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.318849 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.318906 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.318924 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.318947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.318964 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.368650 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:36:52.912135096 +0000 UTC Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.397099 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.397213 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.397300 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:25 crc kubenswrapper[4736]: E0214 10:42:25.397303 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.397352 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:25 crc kubenswrapper[4736]: E0214 10:42:25.397492 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:25 crc kubenswrapper[4736]: E0214 10:42:25.397614 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:25 crc kubenswrapper[4736]: E0214 10:42:25.397721 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.421257 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.421314 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.421331 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.421356 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.421373 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.531448 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.531497 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.531513 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.531537 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.531554 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.633851 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.633902 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.633919 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.633944 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.633968 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.736722 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.736809 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.736828 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.736852 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.736871 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.864873 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.865330 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.865506 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.865711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.865927 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.968926 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.968992 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.969014 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.969043 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:25 crc kubenswrapper[4736]: I0214 10:42:25.969064 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:25Z","lastTransitionTime":"2026-02-14T10:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.072005 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.072076 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.072096 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.072122 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.072139 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.175193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.175322 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.175343 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.175369 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.175385 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.277838 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.277900 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.277916 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.277945 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.277968 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.372942 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:14:02.005765428 +0000 UTC Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.380536 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.380847 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.380944 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.381041 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.381128 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.484732 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.484856 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.484886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.484924 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.484952 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.587203 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.587249 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.587266 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.587290 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.587307 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.689492 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.689558 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.689567 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.689580 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.689587 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.792380 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.792425 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.792435 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.792453 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.792464 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.872762 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.872797 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.872807 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.872822 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.872834 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: E0214 10:42:26.889028 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:26Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.893355 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.893434 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.893454 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.893485 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.893503 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: E0214 10:42:26.909894 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:26Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.914699 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.914820 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.914850 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.914869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.914880 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: E0214 10:42:26.931950 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:26Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.936486 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.936557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.936577 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.936604 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.936622 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: E0214 10:42:26.949957 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:26Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.954546 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.954595 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.954609 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.954628 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.954641 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:26 crc kubenswrapper[4736]: E0214 10:42:26.969069 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:26Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:26 crc kubenswrapper[4736]: E0214 10:42:26.969296 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.971338 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.971377 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.971389 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.971407 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:26 crc kubenswrapper[4736]: I0214 10:42:26.971419 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:26Z","lastTransitionTime":"2026-02-14T10:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.091080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.091143 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.091160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.091187 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.091208 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:27Z","lastTransitionTime":"2026-02-14T10:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.194824 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.194904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.194925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.194951 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.194977 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:27Z","lastTransitionTime":"2026-02-14T10:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.298354 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.298432 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.298451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.298477 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.298495 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:27Z","lastTransitionTime":"2026-02-14T10:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.373470 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:34:21.401625572 +0000 UTC Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.397076 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.397094 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:27 crc kubenswrapper[4736]: E0214 10:42:27.397173 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.397299 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.397343 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:27 crc kubenswrapper[4736]: E0214 10:42:27.397377 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:27 crc kubenswrapper[4736]: E0214 10:42:27.397521 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:27 crc kubenswrapper[4736]: E0214 10:42:27.397589 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.402804 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.402850 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.402867 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.402891 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.402910 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:27Z","lastTransitionTime":"2026-02-14T10:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.506178 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.506258 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.506278 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.506301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.506319 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:27Z","lastTransitionTime":"2026-02-14T10:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.609258 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.609350 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.609363 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.609390 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.609406 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:27Z","lastTransitionTime":"2026-02-14T10:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.712669 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.712713 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.712727 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.712769 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.712782 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:27Z","lastTransitionTime":"2026-02-14T10:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.815695 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.815777 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.815797 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.815818 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.815831 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:27Z","lastTransitionTime":"2026-02-14T10:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.919255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.919316 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.919328 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.919344 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:27 crc kubenswrapper[4736]: I0214 10:42:27.919359 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:27Z","lastTransitionTime":"2026-02-14T10:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.022713 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.022826 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.022845 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.022873 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.022889 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.126630 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.126686 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.126695 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.126708 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.126718 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.229354 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.229388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.229396 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.229409 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.229418 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.332030 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.332436 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.332617 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.332849 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.333013 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.374592 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:43:09.818230523 +0000 UTC Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.436156 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.436221 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.436245 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.436274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.436297 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.539731 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.539869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.539893 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.539923 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.539944 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.643568 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.643629 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.643653 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.643680 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.643701 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.747250 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.747307 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.747323 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.747349 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.747371 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.849988 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.850042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.850067 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.850094 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.850115 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.953610 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.953710 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.953727 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.953816 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:28 crc kubenswrapper[4736]: I0214 10:42:28.953841 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:28Z","lastTransitionTime":"2026-02-14T10:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.058951 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.059014 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.059030 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.059052 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.059072 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.162100 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.162148 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.162163 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.162183 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.162198 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.265002 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.265069 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.265086 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.265113 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.265130 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.368076 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.368125 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.368143 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.368168 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.368188 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.374893 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:29:04.781774854 +0000 UTC Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.396410 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:29 crc kubenswrapper[4736]: E0214 10:42:29.396918 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.396472 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.396504 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:29 crc kubenswrapper[4736]: E0214 10:42:29.397113 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.396505 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:29 crc kubenswrapper[4736]: E0214 10:42:29.397187 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:29 crc kubenswrapper[4736]: E0214 10:42:29.397300 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.470693 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.470787 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.470809 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.470833 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.470849 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.574447 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.574507 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.574518 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.574539 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.574558 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.677339 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.677400 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.677418 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.677441 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.677459 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.780343 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.780385 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.780396 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.780414 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.780426 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.883876 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.884296 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.884456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.884615 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.884786 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.987736 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.987838 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.987860 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.987890 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:29 crc kubenswrapper[4736]: I0214 10:42:29.987911 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:29Z","lastTransitionTime":"2026-02-14T10:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.090840 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.090914 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.090932 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.090957 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.090973 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:30Z","lastTransitionTime":"2026-02-14T10:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.193854 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.193897 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.193917 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.193940 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.193959 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:30Z","lastTransitionTime":"2026-02-14T10:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.296843 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.296914 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.296938 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.296967 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.296985 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:30Z","lastTransitionTime":"2026-02-14T10:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.375405 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:53:28.709863313 +0000 UTC Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.399833 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.399886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.399906 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.399930 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.399947 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:30Z","lastTransitionTime":"2026-02-14T10:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.431121 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.450689 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.472502 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.499997 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.501972 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.502039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.502052 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.502089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.502130 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:30Z","lastTransitionTime":"2026-02-14T10:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.512265 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.525128 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.535573 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.546824 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898
e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.560400 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.583007 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257
958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:
41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.594998 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc 
kubenswrapper[4736]: I0214 10:42:30.605023 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.605079 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.605091 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.605106 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.605116 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:30Z","lastTransitionTime":"2026-02-14T10:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.618904 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.635708 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.654054 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.682858 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/ope
nvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.694810 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.707673 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.707701 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.707711 4736 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.707725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.707737 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:30Z","lastTransitionTime":"2026-02-14T10:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.717814 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250
f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.730632 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:30Z is after 2025-08-24T17:21:41Z"
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.810259 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.810321 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.810336 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.810359 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.810374 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:30Z","lastTransitionTime":"2026-02-14T10:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.913237 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.913302 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.913319 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.913343 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:30 crc kubenswrapper[4736]: I0214 10:42:30.913360 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:30Z","lastTransitionTime":"2026-02-14T10:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.016501 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.016569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.016589 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.016619 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.016640 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.118952 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.118987 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.119000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.119014 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.119025 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.221180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.221243 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.221267 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.221303 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.221329 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.324817 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.324907 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.324925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.324948 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.324964 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.375670 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:16:03.863885924 +0000 UTC
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.396945 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.396978 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.396989 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.396947 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 10:42:31 crc kubenswrapper[4736]: E0214 10:42:31.397267 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827"
Feb 14 10:42:31 crc kubenswrapper[4736]: E0214 10:42:31.397422 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 10:42:31 crc kubenswrapper[4736]: E0214 10:42:31.397524 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 10:42:31 crc kubenswrapper[4736]: E0214 10:42:31.397684 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.435729 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.435989 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.436366 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.436600 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.436814 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.539869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.540158 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.540237 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.540323 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.540389 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.643274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.643307 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.643318 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.643349 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.643361 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.746409 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.747242 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.747278 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.747307 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.747329 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.849883 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.849940 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.849958 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.849986 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.850006 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.953419 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.953549 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.953581 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.953617 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:31 crc kubenswrapper[4736]: I0214 10:42:31.953641 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:31Z","lastTransitionTime":"2026-02-14T10:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.056203 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.056288 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.056305 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.056360 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.056381 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.158639 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.158921 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.159002 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.159077 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.159149 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.262292 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.262541 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.262608 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.262668 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.262729 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.365564 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.365685 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.365697 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.365715 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.365728 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.376637 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:13:44.15796665 +0000 UTC
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.469021 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.469078 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.469095 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.469118 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.469135 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.571851 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.571892 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.571902 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.571917 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.571925 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.674601 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.674642 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.674654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.674673 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.674685 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.777957 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.778004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.778020 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.778043 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.778063 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.881456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.881520 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.881537 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.881561 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.881581 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.985060 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.985143 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.985171 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.985206 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:32 crc kubenswrapper[4736]: I0214 10:42:32.985230 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:32Z","lastTransitionTime":"2026-02-14T10:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.087712 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.087788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.087825 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.087845 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.087857 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:33Z","lastTransitionTime":"2026-02-14T10:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.191030 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.191083 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.191095 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.191115 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.191126 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:33Z","lastTransitionTime":"2026-02-14T10:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.294810 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.294876 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.294893 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.294920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.294938 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:33Z","lastTransitionTime":"2026-02-14T10:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.377402 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:34:00.904011523 +0000 UTC Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.396197 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.396262 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.396286 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:33 crc kubenswrapper[4736]: E0214 10:42:33.396379 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.396384 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:33 crc kubenswrapper[4736]: E0214 10:42:33.396489 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:33 crc kubenswrapper[4736]: E0214 10:42:33.396617 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:33 crc kubenswrapper[4736]: E0214 10:42:33.396704 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.398401 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.398445 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.398461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.398483 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.398500 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:33Z","lastTransitionTime":"2026-02-14T10:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.500966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.501023 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.501040 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.501061 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.501077 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:33Z","lastTransitionTime":"2026-02-14T10:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.603802 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.603847 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.603858 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.603878 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.603890 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:33Z","lastTransitionTime":"2026-02-14T10:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.706993 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.707053 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.707070 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.707095 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.707122 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:33Z","lastTransitionTime":"2026-02-14T10:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.810964 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.811020 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.811041 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.811070 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.811092 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:33Z","lastTransitionTime":"2026-02-14T10:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.914820 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.914877 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.914897 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.914933 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:33 crc kubenswrapper[4736]: I0214 10:42:33.914953 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:33Z","lastTransitionTime":"2026-02-14T10:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.018063 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.018518 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.018709 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.018934 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.019108 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.131557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.131631 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.131650 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.131675 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.131694 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.235421 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.235505 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.235521 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.235542 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.235557 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.340475 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.340562 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.340579 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.340604 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.340627 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.378171 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:02:43.409924425 +0000 UTC Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.443837 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.443900 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.443917 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.443942 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.443959 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.546276 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.546537 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.546642 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.546776 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.546874 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.649266 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.649308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.649321 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.649351 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.649362 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.757208 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.757269 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.757287 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.757312 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.757328 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.804985 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/2.log" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.806212 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/1.log" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.808726 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440" exitCode=1 Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.808776 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.808947 4736 scope.go:117] "RemoveContainer" containerID="dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.809625 4736 scope.go:117] "RemoveContainer" containerID="0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440" Feb 14 10:42:34 crc kubenswrapper[4736]: E0214 10:42:34.809924 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.838793 4736 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3
e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116
aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.852236 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.860583 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.860885 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.861003 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.861123 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.861235 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.872761 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.895183 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:34Z\\\",\\\"message\\\":\\\"-config-operator-7777fb866f-qr5lk is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:nonroot-v2 openshift.io/scc:nonroot-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029093 6258 controller.go:257] Controller 
udn-host-isolation-manager: error found while processing openshift-console-operator/console-operator-58897d9998-tckpd: failed to check if pod openshift-console-operator/console-operator-58897d9998-tckpd is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029110 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr: failed to check if pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr is in primary UDN: could not find OVN pod annotation in map[]\\\\nE0214 10:42:34.093365 6258 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0214 10:42:34.094554 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0214 10:42:34.094656 6258 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f6708
9761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.911988 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.923270 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.933299 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.946006 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.964455 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.964667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.964713 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.964724 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.964758 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.964771 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:34Z","lastTransitionTime":"2026-02-14T10:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.979160 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:34 crc kubenswrapper[4736]: I0214 10:42:34.992942 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:34Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.002585 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:35Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.014731 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:35Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.025501 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:35Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.034261 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:35Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.045044 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:35Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.058309 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa
1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:35Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.067030 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.067065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.067089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.067103 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.067111 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.070006 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:35Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:35 crc 
kubenswrapper[4736]: I0214 10:42:35.168841 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.168889 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.168904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.168922 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.168934 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.271932 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.271970 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.271978 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.271992 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.272001 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.375429 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.375510 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.375533 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.375562 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.375585 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.378625 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:07:47.319807149 +0000 UTC Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.396627 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.396651 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.396651 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:35 crc kubenswrapper[4736]: E0214 10:42:35.396810 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.396866 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:35 crc kubenswrapper[4736]: E0214 10:42:35.397010 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:35 crc kubenswrapper[4736]: E0214 10:42:35.397168 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:35 crc kubenswrapper[4736]: E0214 10:42:35.397213 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.478099 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.478153 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.478170 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.478200 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.478217 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.580849 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.580890 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.580900 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.580919 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.580933 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.683993 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.684041 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.684058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.684081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.684100 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.787460 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.787501 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.787512 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.787529 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.787540 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.814805 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/2.log" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.889381 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.889414 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.889426 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.889440 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.889452 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.992096 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.992131 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.992139 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.992153 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:35 crc kubenswrapper[4736]: I0214 10:42:35.992164 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:35Z","lastTransitionTime":"2026-02-14T10:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.094616 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.094641 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.094651 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.094662 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.094670 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:36Z","lastTransitionTime":"2026-02-14T10:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.196892 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.196912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.196920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.196929 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.196937 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:36Z","lastTransitionTime":"2026-02-14T10:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.301471 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.301549 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.301572 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.301596 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.301612 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:36Z","lastTransitionTime":"2026-02-14T10:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.379161 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:47:12.608840848 +0000 UTC Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.404154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.404553 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.404875 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.405137 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.405390 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:36Z","lastTransitionTime":"2026-02-14T10:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.508083 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.508141 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.508155 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.508177 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.508191 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:36Z","lastTransitionTime":"2026-02-14T10:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.611056 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.611124 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.611153 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.611182 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.611203 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:36Z","lastTransitionTime":"2026-02-14T10:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.713121 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.713177 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.713201 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.713233 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.713255 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:36Z","lastTransitionTime":"2026-02-14T10:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.815603 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.815645 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.815658 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.815675 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.815687 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:36Z","lastTransitionTime":"2026-02-14T10:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.918551 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.918647 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.918701 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.918728 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:36 crc kubenswrapper[4736]: I0214 10:42:36.918841 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:36Z","lastTransitionTime":"2026-02-14T10:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.022390 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.022436 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.022451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.022473 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.022487 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.099332 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.099488 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.099549 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs podName:df467c01-3f4e-41c8-b5fa-b14831cfe827 nodeName:}" failed. No retries permitted until 2026-02-14 10:43:09.099526738 +0000 UTC m=+99.468154126 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs") pod "network-metrics-daemon-przcz" (UID: "df467c01-3f4e-41c8-b5fa-b14831cfe827") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.122211 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.122257 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.122270 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.122287 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.122298 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.134204 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:37Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.137375 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.137433 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.137445 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.137460 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.137492 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.149328 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:37Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.152831 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.152989 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.153076 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.153143 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.153209 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.163108 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:37Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.165978 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.166005 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.166015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.166027 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.166035 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.176055 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:37Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.178956 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.178987 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.178996 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.179008 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.179017 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.189173 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:37Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.189440 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.190828 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.190864 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.190872 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.190886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.190894 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.293093 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.293200 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.293216 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.293281 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.293297 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.379779 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:13:38.571244082 +0000 UTC Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.395721 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.395798 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.395819 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.395848 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.395870 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.396674 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.396727 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.396778 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.397170 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.397527 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.397924 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.398067 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:37 crc kubenswrapper[4736]: E0214 10:42:37.398190 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.498533 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.498569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.498578 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.498600 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.498612 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.602039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.602083 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.602092 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.602111 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.602123 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.704459 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.704489 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.704499 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.704512 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.704520 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.810314 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.810341 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.810349 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.810361 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.810370 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.912666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.912711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.912723 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.912741 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:37 crc kubenswrapper[4736]: I0214 10:42:37.912768 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:37Z","lastTransitionTime":"2026-02-14T10:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.015174 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.015222 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.015232 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.015247 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.015255 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.117873 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.117925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.117935 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.117950 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.117959 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.219923 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.219987 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.220004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.220027 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.220044 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.322381 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.322669 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.322735 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.322819 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.322883 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.380723 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:24:02.581420754 +0000 UTC Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.407074 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.425382 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.425424 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.425433 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.425449 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.425458 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.527480 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.527517 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.527527 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.527542 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.527554 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.629732 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.630019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.630103 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.630198 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.630281 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.731886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.731935 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.731947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.731969 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.731980 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.833641 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.833679 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.833688 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.833703 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.833711 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.936564 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.936600 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.936611 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.936629 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:38 crc kubenswrapper[4736]: I0214 10:42:38.936638 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:38Z","lastTransitionTime":"2026-02-14T10:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.039640 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.039683 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.039692 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.039709 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.039721 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.142882 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.142946 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.142957 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.142974 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.142984 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.244726 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.244784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.244793 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.244808 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.244817 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.347210 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.347264 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.347276 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.347295 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.347309 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.380883 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:02:33.917709249 +0000 UTC Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.396246 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.396285 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.396309 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.396329 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:39 crc kubenswrapper[4736]: E0214 10:42:39.397007 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:39 crc kubenswrapper[4736]: E0214 10:42:39.397169 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:39 crc kubenswrapper[4736]: E0214 10:42:39.397124 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:39 crc kubenswrapper[4736]: E0214 10:42:39.397302 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.450212 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.450540 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.450716 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.450909 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.451044 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.554139 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.554556 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.554716 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.554912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.555056 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.658213 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.658267 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.658275 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.658290 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.658299 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.761504 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.761910 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.762052 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.762274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.762436 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.865229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.865285 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.865296 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.865318 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.865332 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.968202 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.968248 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.968260 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.968277 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:39 crc kubenswrapper[4736]: I0214 10:42:39.968291 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:39Z","lastTransitionTime":"2026-02-14T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.071025 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.071082 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.071099 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.071125 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.071142 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.173410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.173657 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.173667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.173682 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.173690 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.276658 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.276763 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.276782 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.276835 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.276853 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.379849 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.379887 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.379902 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.379917 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.379927 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.381951 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:05:30.78566906 +0000 UTC Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.409029 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a
46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.421114 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.431358 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.440359 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.455208 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.470037 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.483031 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.483057 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.483065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.483077 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.483085 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.483701 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.494555 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f6
0f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.508301 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e4
6510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327f
dd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.517065 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc 
kubenswrapper[4736]: I0214 10:42:40.528161 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.540336 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.551349 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.569350 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:34Z\\\",\\\"message\\\":\\\"-config-operator-7777fb866f-qr5lk is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:nonroot-v2 openshift.io/scc:nonroot-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029093 6258 controller.go:257] Controller 
udn-host-isolation-manager: error found while processing openshift-console-operator/console-operator-58897d9998-tckpd: failed to check if pod openshift-console-operator/console-operator-58897d9998-tckpd is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029110 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr: failed to check if pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr is in primary UDN: could not find OVN pod annotation in map[]\\\\nE0214 10:42:34.093365 6258 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0214 10:42:34.094554 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0214 10:42:34.094656 6258 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f6708
9761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.580304 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.584893 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.584920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.584929 4736 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.584940 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.584948 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.601544 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250
f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.614798 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.625563 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.633921 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.687522 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.687559 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.687575 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.687591 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.687601 4736 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.790379 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.790415 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.790427 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.790443 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.790455 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.832534 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/0.log" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.832579 4736 generic.go:334] "Generic (PLEG): container finished" podID="db7224ab-d0ab-49e3-9154-4d9047057681" containerID="e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1" exitCode=1 Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.832605 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zm7d8" event={"ID":"db7224ab-d0ab-49e3-9154-4d9047057681","Type":"ContainerDied","Data":"e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.832951 4736 scope.go:117] "RemoveContainer" containerID="e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.843451 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.854035 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.865841 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.876817 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.886458 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.894249 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.894294 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.894306 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.894322 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.894333 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.898157 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.937797 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.956220 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.973919 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc 
kubenswrapper[4736]: I0214 10:42:40.989027 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.996776 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.996824 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.996837 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.996861 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:40 crc kubenswrapper[4736]: I0214 10:42:40.996875 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:40Z","lastTransitionTime":"2026-02-14T10:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.000046 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:40Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.010719 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.025585 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.040581 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.062061 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.079823 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.100390 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.100469 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.100481 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.100499 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.100510 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:41Z","lastTransitionTime":"2026-02-14T10:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.103093 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.125092 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:34Z\\\",\\\"message\\\":\\\"-config-operator-7777fb866f-qr5lk is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:nonroot-v2 openshift.io/scc:nonroot-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029093 6258 controller.go:257] Controller 
udn-host-isolation-manager: error found while processing openshift-console-operator/console-operator-58897d9998-tckpd: failed to check if pod openshift-console-operator/console-operator-58897d9998-tckpd is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029110 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr: failed to check if pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr is in primary UDN: could not find OVN pod annotation in map[]\\\\nE0214 10:42:34.093365 6258 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0214 10:42:34.094554 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0214 10:42:34.094656 6258 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f6708
9761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.138978 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.203052 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.203090 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.203099 4736 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.203115 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.203125 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:41Z","lastTransitionTime":"2026-02-14T10:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.305558 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.305595 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.305607 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.305622 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.305633 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:41Z","lastTransitionTime":"2026-02-14T10:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.382093 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:53:04.740462728 +0000 UTC Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.396368 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:41 crc kubenswrapper[4736]: E0214 10:42:41.396512 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.396707 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:41 crc kubenswrapper[4736]: E0214 10:42:41.396782 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.396927 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:41 crc kubenswrapper[4736]: E0214 10:42:41.396999 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.397120 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:41 crc kubenswrapper[4736]: E0214 10:42:41.397176 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.407887 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.407912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.407921 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.407935 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.407946 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:41Z","lastTransitionTime":"2026-02-14T10:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.510979 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.511019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.511030 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.511047 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.511058 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:41Z","lastTransitionTime":"2026-02-14T10:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.613381 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.613423 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.613432 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.613448 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.613459 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:41Z","lastTransitionTime":"2026-02-14T10:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.716546 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.716614 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.716636 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.716667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.716684 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:41Z","lastTransitionTime":"2026-02-14T10:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.818989 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.819029 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.819040 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.819056 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.819067 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:41Z","lastTransitionTime":"2026-02-14T10:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.836482 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/0.log" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.836527 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zm7d8" event={"ID":"db7224ab-d0ab-49e3-9154-4d9047057681","Type":"ContainerStarted","Data":"8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.848692 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\
",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.862445 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5
992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.871979 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc 
kubenswrapper[4736]: I0214 10:42:41.884457 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.897478 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.907933 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.921083 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.921122 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.921133 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.921151 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.921160 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:41Z","lastTransitionTime":"2026-02-14T10:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.929090 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5149d737e77378c19734999007dbaf1c3521bde12f030f91d631f7a3f88fe4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:02Z\\\",\\\"message\\\":\\\"go:208] Removed *v1.Node event handler 2\\\\nI0214 10:42:02.568122 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 10:42:02.568128 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 10:42:02.568136 6057 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 10:42:02.568419 6057 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568667 6057 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568736 6057 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 10:42:02.568895 6057 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 10:42:02.569119 6057 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 10:42:02.569620 6057 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:34Z\\\",\\\"message\\\":\\\"-config-operator-7777fb866f-qr5lk is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:nonroot-v2 openshift.io/scc:nonroot-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029093 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-console-operator/console-operator-58897d9998-tckpd: failed to check if pod openshift-console-operator/console-operator-58897d9998-tckpd is in primary UDN: could not find OVN pod annotation in 
map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029110 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr: failed to check if pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr is in primary UDN: could not find OVN pod annotation in map[]\\\\nE0214 10:42:34.093365 6258 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0214 10:42:34.094554 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0214 10:42:34.094656 6258 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",
\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.940524 4736 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec639f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.960968 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.974899 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.987540 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:41 crc kubenswrapper[4736]: I0214 10:42:41.998709 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:41Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.010970 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:42Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.020943 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:42Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.023388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.023414 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.023422 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc 
kubenswrapper[4736]: I0214 10:42:42.023435 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.023443 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.031761 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:42Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.042444 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:42Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.058563 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:42Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.071618 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:42Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.083694 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:42Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.129985 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.130040 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.130051 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.130338 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.130351 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.232802 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.233308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.233321 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.233450 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.233474 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.336022 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.336062 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.336070 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.336086 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.336097 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.382821 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:00:28.78639409 +0000 UTC Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.438323 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.438552 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.438616 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.438679 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.438732 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.541023 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.541069 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.541082 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.541105 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.541131 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.643518 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.643573 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.643582 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.643597 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.643608 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.745398 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.745432 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.745440 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.745453 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.745462 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.847792 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.847824 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.847833 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.847846 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.847855 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.949891 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.949940 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.949957 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.949978 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:42 crc kubenswrapper[4736]: I0214 10:42:42.949994 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:42Z","lastTransitionTime":"2026-02-14T10:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.053738 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.053851 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.053875 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.053904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.053926 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.156724 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.156824 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.156851 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.156884 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.156906 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.259420 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.259481 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.259498 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.259563 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.259587 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.361434 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.361798 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.361950 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.362087 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.362216 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.383932 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:00:15.957838805 +0000 UTC Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.396194 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.396226 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:43 crc kubenswrapper[4736]: E0214 10:42:43.396300 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.396402 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:43 crc kubenswrapper[4736]: E0214 10:42:43.396453 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:43 crc kubenswrapper[4736]: E0214 10:42:43.396577 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.396684 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:43 crc kubenswrapper[4736]: E0214 10:42:43.397006 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.465474 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.465520 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.465536 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.465557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.465574 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.568128 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.568192 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.568201 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.568213 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.568221 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.670728 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.670975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.671131 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.671223 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.671307 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.774071 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.774140 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.774153 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.774170 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.774182 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.876952 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.877035 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.877061 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.877097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.877121 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.979484 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.979523 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.979532 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.979546 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:43 crc kubenswrapper[4736]: I0214 10:42:43.979555 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:43Z","lastTransitionTime":"2026-02-14T10:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.081650 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.081685 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.081693 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.081706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.081715 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:44Z","lastTransitionTime":"2026-02-14T10:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.183605 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.183649 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.183665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.183688 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.183705 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:44Z","lastTransitionTime":"2026-02-14T10:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.286957 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.287000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.287016 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.287038 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.287054 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:44Z","lastTransitionTime":"2026-02-14T10:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.384047 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:45:13.230132244 +0000 UTC Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.388573 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.388604 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.388618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.388633 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.388643 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:44Z","lastTransitionTime":"2026-02-14T10:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.490986 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.491025 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.491033 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.491045 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.491055 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:44Z","lastTransitionTime":"2026-02-14T10:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.593086 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.593125 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.593136 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.593155 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.593171 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:44Z","lastTransitionTime":"2026-02-14T10:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.695415 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.695478 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.695489 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.695505 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.695516 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:44Z","lastTransitionTime":"2026-02-14T10:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.797493 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.797569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.797582 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.797599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.797610 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:44Z","lastTransitionTime":"2026-02-14T10:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.899072 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.899101 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.899110 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.899124 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:44 crc kubenswrapper[4736]: I0214 10:42:44.899132 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:44Z","lastTransitionTime":"2026-02-14T10:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.001661 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.001714 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.001725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.001763 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.001778 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.103525 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.103557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.103569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.103586 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.103597 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.205963 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.205999 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.206008 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.206021 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.206030 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.307784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.307839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.307855 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.307880 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.307897 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.385030 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:22:50.23372563 +0000 UTC Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.396417 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.396460 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.396520 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.396415 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:45 crc kubenswrapper[4736]: E0214 10:42:45.396545 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:45 crc kubenswrapper[4736]: E0214 10:42:45.396620 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:45 crc kubenswrapper[4736]: E0214 10:42:45.396780 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:45 crc kubenswrapper[4736]: E0214 10:42:45.396823 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.410126 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.410177 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.410193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.410213 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.410232 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.512498 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.512527 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.512538 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.512554 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.512566 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.615868 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.615938 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.615955 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.615978 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.615994 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.718832 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.718915 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.718940 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.718970 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.718992 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.822443 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.822513 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.822530 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.823025 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.823079 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.925941 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.925985 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.925997 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.926016 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:45 crc kubenswrapper[4736]: I0214 10:42:45.926032 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:45Z","lastTransitionTime":"2026-02-14T10:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.028040 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.028085 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.028097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.028115 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.028127 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.130265 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.130290 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.130299 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.130311 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.130320 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.231948 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.231990 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.232001 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.232018 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.232029 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.334647 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.334683 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.334691 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.334706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.334715 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.386092 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:46:57.704543364 +0000 UTC Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.436619 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.436667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.436679 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.436696 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.436709 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.548974 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.549001 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.549009 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.549021 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.549029 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.651308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.651571 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.651649 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.651712 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.651807 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.754592 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.754843 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.754920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.755039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.755120 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.856976 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.857011 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.857021 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.857039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.857052 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.959128 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.959182 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.959197 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.959217 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:46 crc kubenswrapper[4736]: I0214 10:42:46.959231 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:46Z","lastTransitionTime":"2026-02-14T10:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.061762 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.062072 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.062174 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.062457 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.062548 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.166108 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.166174 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.166198 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.166230 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.166253 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.240419 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.240461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.240477 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.240499 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.240515 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.254021 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:47Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.256965 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.257004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.257017 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.257034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.257046 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.276815 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:47Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.280917 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.280966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.280983 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.281005 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.281023 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.298056 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:47Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.301214 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.301258 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.301276 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.301300 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.301317 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.316709 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:47Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.319598 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.319634 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.319645 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.319661 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.319673 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.329853 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:47Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.329957 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.331291 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.331316 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.331324 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.331335 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.331343 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.387142 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:28:27.436608309 +0000 UTC Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.396450 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.396463 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.396509 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.396539 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.396621 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.396698 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.396860 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:47 crc kubenswrapper[4736]: E0214 10:42:47.396940 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.433303 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.433358 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.433371 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.433404 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.433417 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.535501 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.535553 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.535569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.535592 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.535609 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.638091 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.638171 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.638194 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.638643 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.638961 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.741106 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.741146 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.741154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.741170 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.741182 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.846805 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.846850 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.846861 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.846876 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.846887 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.948988 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.949071 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.949083 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.949100 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:47 crc kubenswrapper[4736]: I0214 10:42:47.949113 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:47Z","lastTransitionTime":"2026-02-14T10:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.051392 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.051438 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.051449 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.051468 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.051481 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:48Z","lastTransitionTime":"2026-02-14T10:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.153199 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.153238 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.153250 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.153296 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.153307 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:48Z","lastTransitionTime":"2026-02-14T10:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.256049 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.256094 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.256111 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.256133 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.256148 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:48Z","lastTransitionTime":"2026-02-14T10:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.385463 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.385498 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.385509 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.385524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.385536 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:48Z","lastTransitionTime":"2026-02-14T10:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.387824 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:57:22.511354757 +0000 UTC Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.487385 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.487428 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.487443 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.487656 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.487668 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:48Z","lastTransitionTime":"2026-02-14T10:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.590532 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.590571 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.590581 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.590597 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.590607 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:48Z","lastTransitionTime":"2026-02-14T10:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.692955 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.692981 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.692989 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.693003 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.693011 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:48Z","lastTransitionTime":"2026-02-14T10:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.796095 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.796137 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.796145 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.796159 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.796171 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:48Z","lastTransitionTime":"2026-02-14T10:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.898869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.898903 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.898913 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.898928 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:48 crc kubenswrapper[4736]: I0214 10:42:48.898993 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:48Z","lastTransitionTime":"2026-02-14T10:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.000872 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.000904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.000912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.000925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.000934 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.102970 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.103012 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.103030 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.103050 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.103066 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.205384 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.205429 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.205441 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.205458 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.205470 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.307853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.307886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.307894 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.307908 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.307918 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.388576 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:31:40.464015419 +0000 UTC Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.396368 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.396483 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.396506 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.396367 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:49 crc kubenswrapper[4736]: E0214 10:42:49.396622 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:49 crc kubenswrapper[4736]: E0214 10:42:49.396807 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:49 crc kubenswrapper[4736]: E0214 10:42:49.396869 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:49 crc kubenswrapper[4736]: E0214 10:42:49.397642 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.398044 4736 scope.go:117] "RemoveContainer" containerID="0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440" Feb 14 10:42:49 crc kubenswrapper[4736]: E0214 10:42:49.398424 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.410257 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.410313 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.410333 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.410356 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.410375 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.415421 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.437264 4736 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.453000 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.468665 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.483702 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.499223 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.512109 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.512199 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.512229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.512244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.512264 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.512278 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.524283 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.533927 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed2
08eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\"
,\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.547802 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5
992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.556863 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc 
kubenswrapper[4736]: I0214 10:42:49.568001 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.577407 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.584956 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.600291 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:34Z\\\",\\\"message\\\":\\\"-config-operator-7777fb866f-qr5lk is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:nonroot-v2 openshift.io/scc:nonroot-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029093 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing 
openshift-console-operator/console-operator-58897d9998-tckpd: failed to check if pod openshift-console-operator/console-operator-58897d9998-tckpd is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029110 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr: failed to check if pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr is in primary UDN: could not find OVN pod annotation in map[]\\\\nE0214 10:42:34.093365 6258 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0214 10:42:34.094554 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0214 10:42:34.094656 6258 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177
751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.609018 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.616201 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.616251 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.616261 4736 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.616277 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.616287 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.633826 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250
f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.645925 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.655877 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.719043 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.719101 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.719135 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.719172 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.719194 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.821718 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.821809 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.821832 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.821861 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.821884 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.924203 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.924238 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.924248 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.924263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:49 crc kubenswrapper[4736]: I0214 10:42:49.924274 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:49Z","lastTransitionTime":"2026-02-14T10:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.027664 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.027716 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.027732 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.027784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.027803 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.130937 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.131039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.131092 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.131123 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.131148 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.234182 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.234238 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.234255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.234278 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.234295 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.336643 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.336697 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.336716 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.336738 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.336789 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.389368 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:25:23.639233117 +0000 UTC Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.428653 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f141
2412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.440101 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.440179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.440251 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.440287 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.440305 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.450793 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.471183 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.495377 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:34Z\\\",\\\"message\\\":\\\"-config-operator-7777fb866f-qr5lk is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:nonroot-v2 openshift.io/scc:nonroot-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029093 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing 
openshift-console-operator/console-operator-58897d9998-tckpd: failed to check if pod openshift-console-operator/console-operator-58897d9998-tckpd is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029110 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr: failed to check if pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr is in primary UDN: could not find OVN pod annotation in map[]\\\\nE0214 10:42:34.093365 6258 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0214 10:42:34.094554 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0214 10:42:34.094656 6258 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177
751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.512198 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.523899 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.542950 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.543981 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.544042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.544058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.544079 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.544094 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.559422 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.580234 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.595706 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.611467 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.624437 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.634027 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.646562 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.646617 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.646634 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.646656 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.646675 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.651581 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.665543 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.675832 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.694004 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.716028 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5
992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.728251 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:50 crc 
kubenswrapper[4736]: I0214 10:42:50.748912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.748971 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.748989 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.749015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.749032 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.861262 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.861711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.861925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.862104 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.862314 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.966287 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.966341 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.966358 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.966381 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:50 crc kubenswrapper[4736]: I0214 10:42:50.966398 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:50Z","lastTransitionTime":"2026-02-14T10:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.069905 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.069966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.069989 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.070016 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.070035 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:51Z","lastTransitionTime":"2026-02-14T10:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.174260 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.174646 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.174922 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.175132 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.175323 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:51Z","lastTransitionTime":"2026-02-14T10:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.279069 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.279152 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.279176 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.279212 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.279237 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:51Z","lastTransitionTime":"2026-02-14T10:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.381875 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.381919 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.381930 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.381946 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.381958 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:51Z","lastTransitionTime":"2026-02-14T10:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.390426 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:12:29.194083632 +0000 UTC Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.396879 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.397059 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.397157 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:51 crc kubenswrapper[4736]: E0214 10:42:51.397368 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.397440 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:51 crc kubenswrapper[4736]: E0214 10:42:51.397923 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:51 crc kubenswrapper[4736]: E0214 10:42:51.397788 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:51 crc kubenswrapper[4736]: E0214 10:42:51.397628 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.485393 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.485789 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.485972 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.486863 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.487356 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:51Z","lastTransitionTime":"2026-02-14T10:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.590699 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.591086 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.591286 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.591467 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.591614 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:51Z","lastTransitionTime":"2026-02-14T10:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.694289 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.694334 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.694352 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.694378 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.694396 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:51Z","lastTransitionTime":"2026-02-14T10:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.797795 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.797869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.797887 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.797937 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.797953 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:51Z","lastTransitionTime":"2026-02-14T10:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.901305 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.901830 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.902115 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.902325 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:51 crc kubenswrapper[4736]: I0214 10:42:51.902583 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:51Z","lastTransitionTime":"2026-02-14T10:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.006186 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.006254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.006274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.006304 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.006325 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.108696 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.108819 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.108840 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.108864 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.108882 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.212918 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.213524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.213546 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.213572 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.213594 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.316842 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.316924 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.316935 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.316958 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.316972 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.390943 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:39:58.000113022 +0000 UTC Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.419922 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.419984 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.420006 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.420037 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.420059 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.524369 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.524424 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.524444 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.524470 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.524487 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.627422 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.627505 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.627519 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.627654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.627674 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.731204 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.731252 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.731274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.731298 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.731310 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.834663 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.834703 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.834715 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.834732 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.834763 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.937244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.937279 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.937288 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.937302 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:52 crc kubenswrapper[4736]: I0214 10:42:52.937311 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:52Z","lastTransitionTime":"2026-02-14T10:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.040120 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.040156 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.040166 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.040181 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.040191 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.142719 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.142798 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.142814 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.142839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.142855 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.245139 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.245200 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.245223 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.245254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.245272 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.348481 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.348782 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.348806 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.348836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.348857 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.391634 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:10:01.300533865 +0000 UTC Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.397072 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.397132 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.397227 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.397314 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:53 crc kubenswrapper[4736]: E0214 10:42:53.397447 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:53 crc kubenswrapper[4736]: E0214 10:42:53.397550 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:53 crc kubenswrapper[4736]: E0214 10:42:53.397600 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:53 crc kubenswrapper[4736]: E0214 10:42:53.398147 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.451157 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.451198 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.451206 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.451220 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.451229 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.554269 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.554346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.554368 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.554397 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.554445 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.657322 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.657393 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.657643 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.657676 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.657698 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.760770 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.760826 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.760849 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.760878 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.760901 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.865209 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.865255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.865272 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.865294 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.865311 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.968096 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.968146 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.968160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.968180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:53 crc kubenswrapper[4736]: I0214 10:42:53.968195 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:53Z","lastTransitionTime":"2026-02-14T10:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.073407 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.073451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.073463 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.073479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.073494 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:54Z","lastTransitionTime":"2026-02-14T10:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.175457 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.175524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.175545 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.175572 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.175596 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:54Z","lastTransitionTime":"2026-02-14T10:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.279035 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.279398 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.279554 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.279724 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.279934 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:54Z","lastTransitionTime":"2026-02-14T10:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.382762 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.382816 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.382834 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.382854 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.382867 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:54Z","lastTransitionTime":"2026-02-14T10:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.392398 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:19:45.263118043 +0000 UTC Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.485594 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.485655 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.485672 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.485699 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.485717 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:54Z","lastTransitionTime":"2026-02-14T10:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.589124 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.589180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.589196 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.589214 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.589227 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:54Z","lastTransitionTime":"2026-02-14T10:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.691494 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.691531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.691542 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.691561 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.691572 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:54Z","lastTransitionTime":"2026-02-14T10:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.794473 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.794568 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.794590 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.794622 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.794646 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:54Z","lastTransitionTime":"2026-02-14T10:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.897026 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.897108 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.897141 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.897171 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:54 crc kubenswrapper[4736]: I0214 10:42:54.897192 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:54Z","lastTransitionTime":"2026-02-14T10:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.000267 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.000329 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.000351 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.000388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.000407 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.103272 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.103586 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.103720 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.103919 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.104029 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.163665 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.163840 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 10:43:59.16381844 +0000 UTC m=+149.532445818 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.206858 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.206970 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.206981 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.206999 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.207011 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.265903 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.265986 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.266028 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.266081 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266139 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266243 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266267 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266265 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266285 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266312 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266332 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266279 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266276 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:43:59.266240023 +0000 UTC m=+149.634867431 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266472 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 10:43:59.266444458 +0000 UTC m=+149.635071866 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266516 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 10:43:59.26649765 +0000 UTC m=+149.635125098 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.266564 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:43:59.266546491 +0000 UTC m=+149.635173899 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.310903 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.310966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.311030 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.311060 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.311079 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.392821 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:46:43.51531895 +0000 UTC Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.396307 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.396357 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.396311 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.396391 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.396481 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.396652 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.396709 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:55 crc kubenswrapper[4736]: E0214 10:42:55.396794 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.413726 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.413824 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.413847 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.413878 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.413903 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.516285 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.516327 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.516339 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.516356 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.516367 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.618289 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.618358 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.618376 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.618402 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.618427 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.721658 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.721711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.721727 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.721777 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.721795 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.824468 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.824524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.824541 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.824566 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.824583 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.927842 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.927920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.927939 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.927964 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:55 crc kubenswrapper[4736]: I0214 10:42:55.927983 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:55Z","lastTransitionTime":"2026-02-14T10:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.030243 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.030280 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.030290 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.030305 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.030316 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.132577 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.132618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.132629 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.132646 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.132657 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.235444 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.235506 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.235522 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.235545 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.235563 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.337789 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.337830 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.337840 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.337856 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.337868 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.393345 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:03:40.221113904 +0000 UTC Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.440004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.440080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.440106 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.440135 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.440158 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.542406 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.542448 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.542462 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.542482 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.542497 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.646002 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.646080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.646102 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.646135 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.646158 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.749713 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.750154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.750172 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.750198 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.750216 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.852967 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.853023 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.853039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.853065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.853085 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.955786 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.955878 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.955891 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.955908 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:56 crc kubenswrapper[4736]: I0214 10:42:56.955919 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:56Z","lastTransitionTime":"2026-02-14T10:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.059514 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.059819 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.059838 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.059862 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.059885 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.163138 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.163189 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.163205 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.163230 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.163247 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.266386 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.266443 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.266468 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.266498 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.266521 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.369801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.369853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.369872 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.369899 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.369919 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.393830 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:21:48.336781829 +0000 UTC Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.396125 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:57 crc kubenswrapper[4736]: E0214 10:42:57.396300 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.396581 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:57 crc kubenswrapper[4736]: E0214 10:42:57.396688 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.396952 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:57 crc kubenswrapper[4736]: E0214 10:42:57.397051 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.397237 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:57 crc kubenswrapper[4736]: E0214 10:42:57.397330 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.472603 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.472810 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.472865 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.472890 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.472908 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.576455 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.576517 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.576536 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.576561 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.576578 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.679798 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.679853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.679875 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.679902 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.679917 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.716853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.716913 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.716928 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.716954 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.716972 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: E0214 10:42:57.733123 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.738347 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.738441 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.738461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.738516 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.738533 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: E0214 10:42:57.759553 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.764658 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.764725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.764737 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.764779 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.764791 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.787588 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.787616 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.787624 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.787638 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.787649 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: E0214 10:42:57.803793 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.808423 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.808453 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.808462 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.808476 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.808487 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: E0214 10:42:57.827230 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 10:42:57 crc kubenswrapper[4736]: E0214 10:42:57.827420 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.829082 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.829175 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.829191 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.829218 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.829235 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.932983 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.933048 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.933065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.933088 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:57 crc kubenswrapper[4736]: I0214 10:42:57.933111 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:57Z","lastTransitionTime":"2026-02-14T10:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.036225 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.036275 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.036292 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.036315 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.036332 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.138458 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.138518 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.138534 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.138559 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.138575 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.242200 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.242251 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.242259 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.242274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.242284 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.344802 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.344846 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.344857 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.344874 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.344902 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.394243 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:57:37.493335498 +0000 UTC Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.447045 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.447079 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.447088 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.447101 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.447111 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.549992 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.550042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.550052 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.550069 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.550082 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.653067 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.653116 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.653126 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.653141 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.653150 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.756139 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.756195 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.756211 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.756234 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.756251 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.859722 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.859839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.859862 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.859892 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.859912 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.962711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.962919 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.962943 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.962968 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:58 crc kubenswrapper[4736]: I0214 10:42:58.962984 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:58Z","lastTransitionTime":"2026-02-14T10:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.065596 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.065649 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.065666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.065690 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.065709 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.168768 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.168800 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.168810 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.168823 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.168832 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.271710 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.271737 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.271769 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.271782 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.271790 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.374352 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.374392 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.374407 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.374428 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.374443 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.394704 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:16:53.027040555 +0000 UTC Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.396971 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.397046 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.397187 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:42:59 crc kubenswrapper[4736]: E0214 10:42:59.397185 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.397189 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:42:59 crc kubenswrapper[4736]: E0214 10:42:59.397317 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:42:59 crc kubenswrapper[4736]: E0214 10:42:59.397431 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:42:59 crc kubenswrapper[4736]: E0214 10:42:59.397520 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.477284 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.477326 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.477343 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.477371 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.477392 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.580206 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.580238 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.580250 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.580269 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.580281 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.682397 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.682438 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.682448 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.682461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.682471 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.784922 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.784998 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.785018 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.785044 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.785066 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.888839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.888888 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.888901 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.888925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.888936 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.992051 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.992194 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.992218 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.992250 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:42:59 crc kubenswrapper[4736]: I0214 10:42:59.992272 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:42:59Z","lastTransitionTime":"2026-02-14T10:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.095249 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.095518 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.095616 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.095714 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.095823 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:00Z","lastTransitionTime":"2026-02-14T10:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.199317 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.199804 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.199995 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.200247 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.200406 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:00Z","lastTransitionTime":"2026-02-14T10:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.304106 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.304161 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.304179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.304201 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.304217 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:00Z","lastTransitionTime":"2026-02-14T10:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.394829 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:11:45.352471683 +0000 UTC Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.398542 4736 scope.go:117] "RemoveContainer" containerID="0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.408433 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.408484 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.408503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.408528 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.408545 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:00Z","lastTransitionTime":"2026-02-14T10:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.420785 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c
043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.448333 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.468894 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.483671 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.500766 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.511506 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.511543 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.511553 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.511571 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.511585 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:00Z","lastTransitionTime":"2026-02-14T10:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.514983 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.528689 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.540455 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.556726 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.570028 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5
992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.581016 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc 
kubenswrapper[4736]: I0214 10:43:00.600799 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.613619 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.613646 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.613654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.613667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.613675 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:00Z","lastTransitionTime":"2026-02-14T10:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.620589 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.639999 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.658499 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:34Z\\\",\\\"message\\\":\\\"-config-operator-7777fb866f-qr5lk is in primary UDN: could not find OVN pod annotation in 
map[openshift.io/required-scc:nonroot-v2 openshift.io/scc:nonroot-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029093 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-console-operator/console-operator-58897d9998-tckpd: failed to check if pod openshift-console-operator/console-operator-58897d9998-tckpd is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029110 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr: failed to check if pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr is in primary UDN: could not find OVN pod annotation in map[]\\\\nE0214 10:42:34.093365 6258 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0214 10:42:34.094554 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0214 10:42:34.094656 6258 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177
751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.669883 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.682862 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.701420 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.714588 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.715496 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.715531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.715541 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:00 crc 
kubenswrapper[4736]: I0214 10:43:00.715556 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.715567 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:00Z","lastTransitionTime":"2026-02-14T10:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.817184 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.817236 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.817251 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.817270 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.817285 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:00Z","lastTransitionTime":"2026-02-14T10:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.902143 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/2.log" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.904291 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.904727 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.919579 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.919614 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.919623 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.919637 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.919648 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:00Z","lastTransitionTime":"2026-02-14T10:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.919661 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.933082 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5
992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.942759 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc 
kubenswrapper[4736]: I0214 10:43:00.953183 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.962793 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.971971 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:00 crc kubenswrapper[4736]: I0214 10:43:00.992010 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:34Z\\\",\\\"message\\\":\\\"-config-operator-7777fb866f-qr5lk is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:nonroot-v2 openshift.io/scc:nonroot-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029093 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing 
openshift-console-operator/console-operator-58897d9998-tckpd: failed to check if pod openshift-console-operator/console-operator-58897d9998-tckpd is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029110 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr: failed to check if pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr is in primary UDN: could not find OVN pod annotation in map[]\\\\nE0214 10:42:34.093365 6258 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0214 10:42:34.094554 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0214 10:42:34.094656 6258 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"
/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.003161 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.021726 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.021786 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.021795 4736 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.021808 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.021821 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.023486 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250
f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.034671 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.044916 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.058864 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.071605 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.081171 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.091434 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.099854 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.112889 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.124233 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.124296 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.124360 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.124372 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.124387 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.124397 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.135603 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.226666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.226698 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.226709 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.226723 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.226733 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.328504 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.328579 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.328596 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.328618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.328633 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.395131 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:44:07.615422918 +0000 UTC Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.396412 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.396432 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:01 crc kubenswrapper[4736]: E0214 10:43:01.396505 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.396472 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.396447 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:01 crc kubenswrapper[4736]: E0214 10:43:01.396725 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:01 crc kubenswrapper[4736]: E0214 10:43:01.396785 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:01 crc kubenswrapper[4736]: E0214 10:43:01.396860 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.431775 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.431852 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.431870 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.431896 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.431913 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.535055 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.535117 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.535134 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.535159 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.535176 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.638409 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.638497 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.638533 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.638568 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.638589 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.741521 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.741590 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.741615 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.741645 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.741669 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.845206 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.845262 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.845282 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.845308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.845328 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.911883 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/3.log" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.913483 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/2.log" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.917802 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" exitCode=1 Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.917853 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.917896 4736 scope.go:117] "RemoveContainer" containerID="0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.919103 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:43:01 crc kubenswrapper[4736]: E0214 10:43:01.919360 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.932250 4736 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.949181 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.949256 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.949281 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.949313 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.949340 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:01Z","lastTransitionTime":"2026-02-14T10:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.958245 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.976075 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5
992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:01 crc kubenswrapper[4736]: I0214 10:43:01.990197 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc 
kubenswrapper[4736]: I0214 10:43:02.015392 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.030438 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.043414 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.052431 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.052492 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.052505 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.052531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.052546 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.065371 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f77f8241f7248667a86df45db841d10222092f14a2971b6faf94c71dbd1b440\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:34Z\\\",\\\"message\\\":\\\"-config-operator-7777fb866f-qr5lk is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:nonroot-v2 openshift.io/scc:nonroot-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029093 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing 
openshift-console-operator/console-operator-58897d9998-tckpd: failed to check if pod openshift-console-operator/console-operator-58897d9998-tckpd is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nI0214 10:42:34.029110 6258 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr: failed to check if pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr is in primary UDN: could not find OVN pod annotation in map[]\\\\nE0214 10:42:34.093365 6258 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0214 10:42:34.094554 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0214 10:42:34.094656 6258 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:42:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:43:01Z\\\",\\\"message\\\":\\\"-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 10:43:01.238454 6778 services_controller.go:451] Built service openshift-kube-storage-version-migrator-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-migrator-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0214 10:43:01.238500 6778 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-8fm57 after 0 failed attempt(s)\\\\nI0214 10:43:01.238587 6778 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-8fm57\\\\nF0214 10:43:01.238101 6778 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce
067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.081045 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.100331 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.113545 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.126815 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.135353 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.145829 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.154281 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.154323 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.154331 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.154344 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.154353 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.159756 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.171099 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.179639 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.189791 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.199638 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.256560 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.256601 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.256611 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.256627 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.256637 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.360128 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.360191 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.360210 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.360236 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.360255 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.396000 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:02:45.832707245 +0000 UTC Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.464263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.464298 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.464306 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.464319 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.464327 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.567308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.567357 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.567366 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.567382 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.567394 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.670566 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.670971 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.671140 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.671288 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.671425 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.774990 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.775183 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.775237 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.775280 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.775307 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.878920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.878983 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.879000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.879028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.879049 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.924541 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/3.log" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.931118 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:43:02 crc kubenswrapper[4736]: E0214 10:43:02.931924 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.948162 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.970000 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.982576 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.982645 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.982666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.982695 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.982791 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:02Z","lastTransitionTime":"2026-02-14T10:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:02 crc kubenswrapper[4736]: I0214 10:43:02.988919 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-14T10:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.010000 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.029206 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.048879 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.069195 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.085699 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.085777 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.085794 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.085821 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.085838 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:03Z","lastTransitionTime":"2026-02-14T10:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.090304 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.103008 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc 
kubenswrapper[4736]: I0214 10:43:03.114427 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.126881 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.136685 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.151923 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.172150 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5
992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.188259 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.188316 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.188332 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.188356 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.188371 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:03Z","lastTransitionTime":"2026-02-14T10:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.199149 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.214038 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.231629 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.253260 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:43:01Z\\\",\\\"message\\\":\\\"-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 10:43:01.238454 6778 services_controller.go:451] Built service openshift-kube-storage-version-migrator-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-migrator-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0214 10:43:01.238500 6778 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-8fm57 after 0 failed attempt(s)\\\\nI0214 10:43:01.238587 6778 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-8fm57\\\\nF0214 10:43:01.238101 6778 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177
751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.266520 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:03Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.290695 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.290762 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.290774 4736 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.290794 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.290806 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:03Z","lastTransitionTime":"2026-02-14T10:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.393960 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.394027 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.394043 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.394070 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.394087 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:03Z","lastTransitionTime":"2026-02-14T10:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.396148 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:54:59.225909663 +0000 UTC Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.396249 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.396534 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.396557 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.396586 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:03 crc kubenswrapper[4736]: E0214 10:43:03.396669 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:03 crc kubenswrapper[4736]: E0214 10:43:03.396715 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:03 crc kubenswrapper[4736]: E0214 10:43:03.396839 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:03 crc kubenswrapper[4736]: E0214 10:43:03.396945 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.497848 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.497918 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.497941 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.497967 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.497983 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:03Z","lastTransitionTime":"2026-02-14T10:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.601172 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.601226 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.601244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.601266 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.601283 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:03Z","lastTransitionTime":"2026-02-14T10:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.703765 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.703838 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.703850 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.703870 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.703882 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:03Z","lastTransitionTime":"2026-02-14T10:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.806833 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.806879 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.806890 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.806907 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.806921 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:03Z","lastTransitionTime":"2026-02-14T10:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.914087 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.914156 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.914191 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.914222 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:03 crc kubenswrapper[4736]: I0214 10:43:03.914247 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:03Z","lastTransitionTime":"2026-02-14T10:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.018080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.018141 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.018158 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.018182 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.018199 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.121576 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.121644 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.121665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.121691 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.121709 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.224144 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.224215 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.224241 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.224271 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.224290 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.327850 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.328151 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.328278 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.328441 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.328566 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.396474 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:47:55.065665726 +0000 UTC Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.431117 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.431438 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.431603 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.431877 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.432284 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.535801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.535856 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.535878 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.535906 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.535926 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.638435 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.639354 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.639587 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.639801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.640066 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.743254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.743701 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.743975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.744135 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.744291 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.847399 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.847623 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.847706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.847848 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.847938 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.950234 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.950301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.950319 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.950345 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:04 crc kubenswrapper[4736]: I0214 10:43:04.950365 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:04Z","lastTransitionTime":"2026-02-14T10:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.053372 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.053455 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.053472 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.053497 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.053514 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.155731 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.155796 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.155805 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.155818 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.155827 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.258524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.258579 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.258592 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.258611 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.258626 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.360538 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.360610 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.360627 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.360652 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.360674 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.396190 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.396283 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.396259 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.396222 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:05 crc kubenswrapper[4736]: E0214 10:43:05.396418 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:05 crc kubenswrapper[4736]: E0214 10:43:05.396805 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:05 crc kubenswrapper[4736]: E0214 10:43:05.396612 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:05 crc kubenswrapper[4736]: E0214 10:43:05.396987 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.397023 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 21:21:13.442080193 +0000 UTC Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.463864 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.463926 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.463943 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.463968 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.463986 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.566386 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.566451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.566469 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.566496 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.566564 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.669908 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.670008 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.670027 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.670058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.670079 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.772834 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.772899 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.772921 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.772950 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.772971 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.876255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.876311 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.876328 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.876351 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.876368 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.979015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.979078 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.979106 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.979134 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:05 crc kubenswrapper[4736]: I0214 10:43:05.979155 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:05Z","lastTransitionTime":"2026-02-14T10:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.082540 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.082614 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.082637 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.082665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.082686 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:06Z","lastTransitionTime":"2026-02-14T10:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.186154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.186213 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.186231 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.186256 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.186275 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:06Z","lastTransitionTime":"2026-02-14T10:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.289793 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.289861 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.289879 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.289907 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.289923 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:06Z","lastTransitionTime":"2026-02-14T10:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.392635 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.392694 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.392711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.392734 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.392789 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:06Z","lastTransitionTime":"2026-02-14T10:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.397385 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:21:11.89690222 +0000 UTC Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.499796 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.499875 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.499897 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.499927 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.499961 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:06Z","lastTransitionTime":"2026-02-14T10:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.603606 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.603722 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.603824 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.603861 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.603883 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:06Z","lastTransitionTime":"2026-02-14T10:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.707383 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.707450 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.707472 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.707502 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.707542 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:06Z","lastTransitionTime":"2026-02-14T10:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.811370 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.811787 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.811988 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.812237 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.812452 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:06Z","lastTransitionTime":"2026-02-14T10:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.916012 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.916284 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.916348 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.916422 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:06 crc kubenswrapper[4736]: I0214 10:43:06.916491 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:06Z","lastTransitionTime":"2026-02-14T10:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.018530 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.018620 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.018646 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.018680 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.018703 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.121088 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.121140 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.121157 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.121182 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.121199 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.224313 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.224368 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.224385 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.224447 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.224465 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.327502 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.327568 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.327591 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.327623 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.327644 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.396971 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.396993 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.397220 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:07 crc kubenswrapper[4736]: E0214 10:43:07.397486 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.397538 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:47:44.189879347 +0000 UTC Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.397691 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:07 crc kubenswrapper[4736]: E0214 10:43:07.398168 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:07 crc kubenswrapper[4736]: E0214 10:43:07.398369 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:07 crc kubenswrapper[4736]: E0214 10:43:07.398289 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.430881 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.430944 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.430967 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.430996 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.431017 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.533178 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.533239 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.533253 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.533270 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.533282 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.635403 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.635696 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.635781 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.635856 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.635923 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.738139 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.738188 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.738208 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.738232 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.738249 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.841599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.841669 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.841687 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.841710 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.841727 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.861567 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.861639 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.861664 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.861693 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.861714 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: E0214 10:43:07.884600 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.890722 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.890836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.890887 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.890913 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.890934 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: E0214 10:43:07.914008 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.920185 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.920417 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.920556 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.920706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.920963 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: E0214 10:43:07.941171 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.946336 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.946388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.946405 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.946429 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.946493 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:07 crc kubenswrapper[4736]: E0214 10:43:07.970151 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.976007 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.976133 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.976160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.976238 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:07 crc kubenswrapper[4736]: I0214 10:43:07.976334 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:07Z","lastTransitionTime":"2026-02-14T10:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: E0214 10:43:07.999856 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:08 crc kubenswrapper[4736]: E0214 10:43:08.000010 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.002759 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.002789 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.002800 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.002818 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.002830 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.105570 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.105613 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.105629 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.105654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.105676 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.208326 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.208380 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.208392 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.208410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.208421 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.311787 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.311840 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.311857 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.311883 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.311900 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.397943 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 02:37:32.425546494 +0000 UTC Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.414711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.414786 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.414813 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.414839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.414860 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.518104 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.518426 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.518625 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.518908 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.519106 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.622373 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.622429 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.622445 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.622468 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.622485 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.725965 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.726019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.726038 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.726060 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.726077 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.829104 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.829159 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.829174 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.829200 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.829216 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.932084 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.932158 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.932193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.932221 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:08 crc kubenswrapper[4736]: I0214 10:43:08.932240 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:08Z","lastTransitionTime":"2026-02-14T10:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.038486 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.038557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.038574 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.038599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.038615 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.141874 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.141931 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.141947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.141970 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.141988 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.148581 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:09 crc kubenswrapper[4736]: E0214 10:43:09.148830 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:43:09 crc kubenswrapper[4736]: E0214 10:43:09.148927 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs podName:df467c01-3f4e-41c8-b5fa-b14831cfe827 nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.148903976 +0000 UTC m=+163.517531384 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs") pod "network-metrics-daemon-przcz" (UID: "df467c01-3f4e-41c8-b5fa-b14831cfe827") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.244790 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.245183 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.245504 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.245825 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.246129 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.350121 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.350588 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.351081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.351537 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.351988 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.396098 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:09 crc kubenswrapper[4736]: E0214 10:43:09.396219 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.396403 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:09 crc kubenswrapper[4736]: E0214 10:43:09.396461 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.396737 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:09 crc kubenswrapper[4736]: E0214 10:43:09.397085 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.396125 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:09 crc kubenswrapper[4736]: E0214 10:43:09.397835 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.398805 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:33:36.307868998 +0000 UTC Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.455820 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.456074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.456226 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.456384 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.456523 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.559340 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.559658 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.559837 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.560000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.560153 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.663657 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.664004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.664108 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.664199 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.664285 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.767798 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.767877 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.767897 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.767924 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.767941 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.870830 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.870901 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.870925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.870952 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.870970 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.973875 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.973926 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.973944 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.973967 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:09 crc kubenswrapper[4736]: I0214 10:43:09.973983 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:09Z","lastTransitionTime":"2026-02-14T10:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.076231 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.076308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.076343 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.076373 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.076397 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:10Z","lastTransitionTime":"2026-02-14T10:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.179226 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.179294 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.179314 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.179341 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.179363 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:10Z","lastTransitionTime":"2026-02-14T10:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.282729 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.282845 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.282869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.282897 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.282919 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:10Z","lastTransitionTime":"2026-02-14T10:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.386309 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.386383 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.386401 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.386431 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.386450 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:10Z","lastTransitionTime":"2026-02-14T10:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.399000 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:56:19.257385031 +0000 UTC Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.417629 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.438873 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.457125 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.483160 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.494612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.494665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.494683 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.494706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.494725 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:10Z","lastTransitionTime":"2026-02-14T10:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.514142 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.534839 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.554507 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.571813 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.592042 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.597146 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.597180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.597195 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.597214 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.597227 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:10Z","lastTransitionTime":"2026-02-14T10:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.610090 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.626171 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.647003 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.670979 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5
992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.686315 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc 
kubenswrapper[4736]: I0214 10:43:10.700346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.700390 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.700401 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.700418 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.700430 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:10Z","lastTransitionTime":"2026-02-14T10:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.710591 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.728081 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.740489 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.770184 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:43:01Z\\\",\\\"message\\\":\\\"-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 10:43:01.238454 6778 services_controller.go:451] Built service openshift-kube-storage-version-migrator-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-migrator-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0214 10:43:01.238500 6778 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-8fm57 after 0 failed attempt(s)\\\\nI0214 10:43:01.238587 6778 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-8fm57\\\\nF0214 10:43:01.238101 6778 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177
751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.785114 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec6
39f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:10Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.803286 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.803334 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.803347 4736 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.803365 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.803376 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:10Z","lastTransitionTime":"2026-02-14T10:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.905896 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.905947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.905963 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.905984 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:10 crc kubenswrapper[4736]: I0214 10:43:10.905999 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:10Z","lastTransitionTime":"2026-02-14T10:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.009463 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.009797 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.009947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.010095 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.010284 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.113657 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.114178 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.114346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.114497 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.114657 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.218238 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.218288 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.218305 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.218333 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.218351 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.321365 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.321422 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.321447 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.321476 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.321497 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.396679 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.396783 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:11 crc kubenswrapper[4736]: E0214 10:43:11.396870 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:11 crc kubenswrapper[4736]: E0214 10:43:11.396992 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.397095 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.397133 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:11 crc kubenswrapper[4736]: E0214 10:43:11.397218 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:11 crc kubenswrapper[4736]: E0214 10:43:11.397295 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.399553 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:51:37.990899535 +0000 UTC Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.424034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.424074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.424089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.424109 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.424126 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.527203 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.527278 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.527302 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.527333 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.527356 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.630960 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.631029 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.631053 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.631081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.631099 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.734363 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.734441 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.734467 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.734492 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.734512 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.836538 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.836600 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.836620 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.836644 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.836676 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.939976 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.940054 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.940090 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.940119 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:11 crc kubenswrapper[4736]: I0214 10:43:11.940140 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:11Z","lastTransitionTime":"2026-02-14T10:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.042734 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.042808 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.042819 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.042836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.042848 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.158602 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.158653 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.158667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.158692 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.158706 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.261245 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.261301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.261317 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.261373 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.261392 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.364440 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.364503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.364525 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.364554 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.364616 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.400281 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:50:23.944370086 +0000 UTC Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.468080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.468544 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.468829 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.469074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.469293 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.573028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.573161 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.573253 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.573368 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.573464 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.676950 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.677042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.677068 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.677102 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.677144 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.784180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.784269 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.784287 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.784320 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.784338 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.887393 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.887435 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.887451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.887474 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.887490 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.990547 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.990948 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.990966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.990987 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:12 crc kubenswrapper[4736]: I0214 10:43:12.991003 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:12Z","lastTransitionTime":"2026-02-14T10:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.093833 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.093929 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.093975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.094005 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.094029 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:13Z","lastTransitionTime":"2026-02-14T10:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.197523 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.197602 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.197628 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.197658 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.197680 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:13Z","lastTransitionTime":"2026-02-14T10:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.304940 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.305004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.305026 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.305130 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.305206 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:13Z","lastTransitionTime":"2026-02-14T10:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.397081 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.397163 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.397265 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:13 crc kubenswrapper[4736]: E0214 10:43:13.397572 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.397611 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:13 crc kubenswrapper[4736]: E0214 10:43:13.397794 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:13 crc kubenswrapper[4736]: E0214 10:43:13.397967 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:13 crc kubenswrapper[4736]: E0214 10:43:13.398108 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.401829 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:59:59.160457446 +0000 UTC Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.408312 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.408367 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.408384 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.408409 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.408429 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:13Z","lastTransitionTime":"2026-02-14T10:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.511680 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.511792 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.511813 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.511836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.511853 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:13Z","lastTransitionTime":"2026-02-14T10:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.614839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.614957 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.614977 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.615000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.615016 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:13Z","lastTransitionTime":"2026-02-14T10:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.718391 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.718457 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.718476 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.718505 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.718524 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:13Z","lastTransitionTime":"2026-02-14T10:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.821963 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.822025 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.822042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.822066 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.822084 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:13Z","lastTransitionTime":"2026-02-14T10:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.925332 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.925404 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.925428 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.925456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:13 crc kubenswrapper[4736]: I0214 10:43:13.925478 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:13Z","lastTransitionTime":"2026-02-14T10:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.028467 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.028510 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.028524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.028543 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.028557 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.131125 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.131194 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.131212 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.131236 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.131253 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.234335 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.234401 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.234419 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.234449 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.234473 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.338053 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.338191 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.338207 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.338231 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.338248 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.397948 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:43:14 crc kubenswrapper[4736]: E0214 10:43:14.398242 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.402092 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:39:07.628570603 +0000 UTC Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.440706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.440788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.440806 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.440828 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.440844 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.543236 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.543286 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.543303 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.543324 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.543342 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.646852 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.646926 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.646948 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.646974 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.646996 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.750050 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.750111 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.750129 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.750154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.750170 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.852531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.852620 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.852642 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.852674 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.852692 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.955612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.955663 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.955675 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.955695 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:14 crc kubenswrapper[4736]: I0214 10:43:14.955710 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:14Z","lastTransitionTime":"2026-02-14T10:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.058543 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.058625 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.058648 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.058675 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.058734 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.161658 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.161717 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.161734 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.161777 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.161788 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.264986 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.265058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.265080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.265102 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.265119 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.368535 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.368595 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.368612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.368637 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.368654 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.396812 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.396882 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.396881 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.396819 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:15 crc kubenswrapper[4736]: E0214 10:43:15.397002 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:15 crc kubenswrapper[4736]: E0214 10:43:15.397107 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:15 crc kubenswrapper[4736]: E0214 10:43:15.397264 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:15 crc kubenswrapper[4736]: E0214 10:43:15.397325 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.402622 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:49:08.813878918 +0000 UTC Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.471179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.471613 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.472558 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.472893 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.473095 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.575867 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.576219 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.576605 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.576864 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.577079 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.680588 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.680661 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.680683 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.680712 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.680733 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.783604 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.783962 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.784140 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.784285 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.784482 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.887538 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.887599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.887616 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.887638 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.887658 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.990472 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.990708 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.990794 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.990864 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:15 crc kubenswrapper[4736]: I0214 10:43:15.990928 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:15Z","lastTransitionTime":"2026-02-14T10:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.093820 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.093977 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.094004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.094032 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.094056 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:16Z","lastTransitionTime":"2026-02-14T10:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.197652 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.197933 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.197968 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.197995 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.198013 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:16Z","lastTransitionTime":"2026-02-14T10:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.301807 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.301879 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.301902 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.301933 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.301959 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:16Z","lastTransitionTime":"2026-02-14T10:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.402732 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:18:06.050336949 +0000 UTC Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.404842 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.404898 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.404918 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.404945 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.404968 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:16Z","lastTransitionTime":"2026-02-14T10:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.508095 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.508457 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.508589 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.508711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.508917 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:16Z","lastTransitionTime":"2026-02-14T10:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.611619 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.612168 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.612442 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.612671 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.612962 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:16Z","lastTransitionTime":"2026-02-14T10:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.716350 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.716430 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.716440 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.716455 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.716467 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:16Z","lastTransitionTime":"2026-02-14T10:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.818570 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.818595 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.818602 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.818614 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.818623 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:16Z","lastTransitionTime":"2026-02-14T10:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.920139 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.920175 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.920185 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.920201 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:16 crc kubenswrapper[4736]: I0214 10:43:16.920212 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:16Z","lastTransitionTime":"2026-02-14T10:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.023539 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.023604 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.023620 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.023640 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.023656 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.125931 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.125994 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.126016 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.126045 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.126067 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.228613 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.228665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.228687 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.228714 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.228735 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.332138 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.332194 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.332212 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.332241 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.332258 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.396625 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.397115 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.397234 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:17 crc kubenswrapper[4736]: E0214 10:43:17.397348 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.397502 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:17 crc kubenswrapper[4736]: E0214 10:43:17.397614 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:17 crc kubenswrapper[4736]: E0214 10:43:17.397946 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:17 crc kubenswrapper[4736]: E0214 10:43:17.398053 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.403954 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:10:07.101384177 +0000 UTC Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.435511 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.435572 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.435596 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.435617 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.435635 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.538507 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.538609 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.538633 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.538662 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.538683 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.641182 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.641243 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.641264 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.641291 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.641312 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.744758 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.744820 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.744831 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.744850 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.744865 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.847708 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.847755 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.847766 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.847780 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.847789 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.950577 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.950618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.950629 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.950644 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:17 crc kubenswrapper[4736]: I0214 10:43:17.950657 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:17Z","lastTransitionTime":"2026-02-14T10:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.028353 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.028801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.028875 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.028998 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.029119 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: E0214 10:43:18.042941 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.047287 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.047335 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.047360 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.047379 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.047390 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: E0214 10:43:18.069304 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.078509 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.078551 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.078584 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.078603 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.078615 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: E0214 10:43:18.091014 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.094115 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.094172 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.094184 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.094198 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.094210 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: E0214 10:43:18.109161 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.113101 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.113129 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.113164 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.113179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.113191 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: E0214 10:43:18.132793 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148056Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608856Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T10:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eaba9d57-0133-42a1-b586-0a2596194ba8\\\",\\\"systemUUID\\\":\\\"cd5bc215-ecb6-489e-b52e-104c9081339f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:18 crc kubenswrapper[4736]: E0214 10:43:18.132919 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.134848 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.134901 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.134919 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.134941 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.134960 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.237723 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.237871 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.238293 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.238482 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.238515 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.342338 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.342421 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.342445 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.342479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.342503 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.404049 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:37:19.342656168 +0000 UTC Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.446234 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.446289 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.446306 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.446328 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.446345 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.549587 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.549654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.549675 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.549706 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.549727 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.652100 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.652168 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.652192 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.652262 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.652292 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.755369 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.755424 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.755442 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.755465 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.755482 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.857899 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.857959 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.858103 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.858135 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.858157 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.961072 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.961157 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.961179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.961215 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:18 crc kubenswrapper[4736]: I0214 10:43:18.961237 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:18Z","lastTransitionTime":"2026-02-14T10:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.064165 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.064204 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.064214 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.064231 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.064245 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.167446 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.167500 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.167521 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.167550 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.167569 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.270911 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.270977 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.270995 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.271019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.271036 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.379124 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.379210 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.379277 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.379315 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.379341 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.396299 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.396368 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:19 crc kubenswrapper[4736]: E0214 10:43:19.396499 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.396512 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.396542 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:19 crc kubenswrapper[4736]: E0214 10:43:19.396671 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:19 crc kubenswrapper[4736]: E0214 10:43:19.396888 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:19 crc kubenswrapper[4736]: E0214 10:43:19.396967 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.404793 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:42:03.917482675 +0000 UTC Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.481181 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.481221 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.481228 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.481241 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.481250 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.583410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.583456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.583471 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.583492 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.583507 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.686449 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.686495 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.686509 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.686529 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.686545 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.789890 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.789992 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.790018 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.790048 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.790070 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.892608 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.892691 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.892716 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.892777 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.892803 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.994982 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.995087 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.995105 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.995132 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:19 crc kubenswrapper[4736]: I0214 10:43:19.995151 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:19Z","lastTransitionTime":"2026-02-14T10:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.098530 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.098586 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.098608 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.098639 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.098660 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:20Z","lastTransitionTime":"2026-02-14T10:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.201924 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.201982 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.201999 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.202025 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.202044 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:20Z","lastTransitionTime":"2026-02-14T10:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.305477 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.305557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.305581 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.305608 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.305628 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:20Z","lastTransitionTime":"2026-02-14T10:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.405029 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:13:00.047709286 +0000 UTC Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.408266 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.408317 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.408337 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.408359 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.408375 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:20Z","lastTransitionTime":"2026-02-14T10:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.423629 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb2b116-efd4-4f64-be6c-5cc5a0655589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01d4b6e46510671b32b4bec854140fc1575bb4f2563d8a02066f40e9b3db741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2b1b66bdba76b9ab441356c42dac25ec137e7fb6cb600257958ec1d7097032a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://804925f35f49955681d86a1d67a01ee21bb2bcb63e773f18ce2e531b4292b65b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66955e91bb90ba2ed2abe19833386653438c37e7efe6f6f0f548a0adba14b7d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ee5992739bb7b110d0ac81e78524345f9bb55c3bb80b9ff12f7bb645452340\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3bc00abac333d1310759001d67bd201aafdeaa1fa5e8b5e9505677653b3b5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f9191d31def8a3c94b8cdaf83a9b33ace4ccb5c8ef5985810b639819a19d586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqhxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w6fw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.439152 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-przcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df467c01-3f4e-41c8-b5fa-b14831cfe827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kkdjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-przcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc 
kubenswrapper[4736]: I0214 10:43:20.456784 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2e3f028-461a-48ef-97b6-77ac14e74487\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f051b8cc8791
b138b579435e6bef63a816ea27ce063ca657f462269b77b5be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T10:41:49Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771065694\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771065694\\\\\\\\\\\\\\\" (2026-02-14 09:41:34 +0000 UTC to 2027-02-14 09:41:34 +0000 UTC (now=2026-02-14 10:41:49.686804427 +0000 UTC))\\\\\\\"\\\\nI0214 10:41:49.686844 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0214 10:41:49.686925 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0214 10:41:49.686961 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2176232732/tls.crt::/tmp/serving-cert-2176232732/tls.key\\\\\\\"\\\\nI0214 10:41:49.687057 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0214 10:41:49.687093 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0214 10:41:49.700352 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0214 10:41:49.689040 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700404 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0214 10:41:49.700502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0214 10:41:49.700517 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0214 10:41:49.689023 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0214 10:41:49.700987 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nF0214 10:41:49.700961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.469069 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.481697 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8fm57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17edb3a-04a8-4c2d-8216-43dd45a1bf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22501898e651af7dbe2876563201618e9c028813ee90c5f193eaf3cfd3d3747a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t88lg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8fm57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.504203 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zm7d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7224ab-d0ab-49e3-9154-4d9047057681\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:42:39Z\\\",\\\"message\\\":\\\"2026-02-14T10:41:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc\\\\n2026-02-14T10:41:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f601d1dc-9160-4abc-829b-5109fcec40bc to /host/opt/cni/bin/\\\\n2026-02-14T10:41:54Z [verbose] multus-daemon started\\\\n2026-02-14T10:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-14T10:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zm7d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.510340 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.510371 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.510383 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 
10:43:20.510399 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.510410 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:20Z","lastTransitionTime":"2026-02-14T10:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.515526 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04011cfa-0fe1-47af-b7bc-a9895caff97f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e3eccfe206fe28eb31ea9f2c2865c14e7a814ac2b21b9e1bd39d60772b66cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db35e8d4f12bd46c329d83f9df4a57050ec639f8f0a809eef25ca39b9e2db56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftszz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:42:03Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4qqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.544700 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17170d49-21e4-435b-958d-296ef569b257\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec7e8c55f6897170d7f783878f5b8b6d12aaf722ae46c3f8a177d4f0c07f315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae64cea4160181ed55a7f911e43d2d31612539c89bfea3e69a1e3e4ca4391cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4680fae82172f4b358c90256396652936d0f19d58b8dc4b46e083b0cb7264d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aadf2cc2369358f1412412c0a1e0a8862efbd605ff1ba3bd78edbb7f2605466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58c913f1952aa719d95d83d719784cdb650d83ac5bf6721e7a3c9bd24bd2b593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afce
e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee9f230a8ff094369857d862116aa47a58b6aee75bf1c956d52a8baa9afcee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd66df68abf11c046156ba2652753d52fcfaa71761707090871334b07f506f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c5dfa7bee84e25866ea481afb2f507593d22ae6250f6f7432234b581f2eb69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.561371 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://850601e143796826807ff3555eb3e5f28c101ee790b294e956367708478c65d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.580146 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.612474 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.612507 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.612518 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.612533 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.612546 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:20Z","lastTransitionTime":"2026-02-14T10:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.625094 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4586e477-2198-4f75-aeba-0eaf894cde1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T10:43:01Z\\\",\\\"message\\\":\\\"-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 10:43:01.238454 6778 services_controller.go:451] Built service openshift-kube-storage-version-migrator-operator/metrics cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-migrator-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0214 10:43:01.238500 6778 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-8fm57 after 0 failed attempt(s)\\\\nI0214 10:43:01.238587 6778 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-8fm57\\\\nF0214 10:43:01.238101 6778 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T10:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://facde56725ca513177
751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hb2mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7vfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.640383 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9567027b-35b1-4f78-a392-017135aa62eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a1c3510232c5c3c0f980900e9e7e573618569b153716ad22b9c28a46d632f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38974ec786f343b258e511fe43c55cb89d10a7a462c74b1538ebb822d3f61665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.658800 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40a6ba271d9f69d96477f5d01669c29f4dd0da8f96ee6b035e9da082a4a49401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e0f4f316af68f9dc2e47eeb061936ebf57c059548ff6cc82a6a375ddf88bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.670572 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22bfc94a-170b-47f5-bc6b-c6e77720371d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://492be524b70cc87117ba13944141fb9ceee08ef3faed01a2c194faca854b7684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171ba176d1753039f577b6d0ee72115dc107fe5
3ad81964d40ece0d04b39299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjt6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2bpbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.679692 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jdrpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1eac55-e1d7-4aaf-83a8-786d84e7a8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78604bbabedd10e061e0d4faac71f13b2376d0bf2e71d15912d6da21b34ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2jql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jdrpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.690737 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70bb30f5-1354-4f18-acde-ac6e45917bff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab061b79548c3f51f96bd927c93cddea7ae8c862750a8e21d816189a5462aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8103786f3474e85e5967de52988544c3c2a52deca69e543a2d53958e0dc3102c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa10a182900c28189df2f1a8373d9808a75c6b786806ccbecfd397587a516c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.702395 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceeda76d-4578-4d3e-b6c2-ba1d959ab606\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbc4ecd75ec201c4ac478f5b17755f096038ddc88f997df8932aeeccce42c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8c8867d1d05d4caf4e2f4318cf60a1a6a2c32afc0fbbf5fab3d20b6750f09f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb09db29ccf20ad93b4a4b598b1e1f4d11a94de878f7e39b87a4bf0e26f44595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://17093089efb0bae74a69b90bb81a46ed78615ec7b0d4feedbe94c69cd6cccb48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T10:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T10:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T10:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.714604 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f019c14c248ae761036f71350d6f7a9ea3095e25fd637f3ba821c5cd32587616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T10:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.715053 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.715085 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.715097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.715116 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.715128 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:20Z","lastTransitionTime":"2026-02-14T10:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.727057 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T10:41:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T10:43:20Z is after 2025-08-24T17:21:41Z" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.817458 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.817537 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.817590 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.817620 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.817633 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:20Z","lastTransitionTime":"2026-02-14T10:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.920242 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.920314 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.920339 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.920370 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:20 crc kubenswrapper[4736]: I0214 10:43:20.920393 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:20Z","lastTransitionTime":"2026-02-14T10:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.022995 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.023074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.023096 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.023123 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.023146 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.126826 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.126891 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.126911 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.126941 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.126961 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.230052 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.230114 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.230135 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.230163 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.230183 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.333423 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.333484 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.333509 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.333538 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.333561 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.396219 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:21 crc kubenswrapper[4736]: E0214 10:43:21.396615 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.396287 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.396324 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:21 crc kubenswrapper[4736]: E0214 10:43:21.397028 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:21 crc kubenswrapper[4736]: E0214 10:43:21.396902 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.396220 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:21 crc kubenswrapper[4736]: E0214 10:43:21.397225 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.405677 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:09:41.080813593 +0000 UTC Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.436488 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.436545 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.436568 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.436594 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.436614 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.539526 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.539590 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.539614 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.539642 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.539665 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.643030 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.643109 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.643132 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.643160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.643191 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.745916 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.745965 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.745980 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.746002 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.746014 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.849366 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.849594 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.849686 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.849800 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.849867 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.951913 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.952021 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.952045 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.952070 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:21 crc kubenswrapper[4736]: I0214 10:43:21.952086 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:21Z","lastTransitionTime":"2026-02-14T10:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.055032 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.055108 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.055131 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.055160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.055181 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.159189 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.159239 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.159249 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.159265 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.159275 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.261579 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.261614 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.261622 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.261637 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.261646 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.364546 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.364602 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.364619 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.364663 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.364681 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.406022 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:06:01.135771707 +0000 UTC Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.467784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.467822 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.467832 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.467848 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.467858 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.569555 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.569589 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.569597 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.569610 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.569619 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.671996 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.672060 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.672083 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.672111 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.672147 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.774877 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.774942 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.774964 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.774995 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.775019 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.877259 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.877300 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.877315 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.877335 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.877349 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.979935 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.980004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.980028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.980054 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:22 crc kubenswrapper[4736]: I0214 10:43:22.980074 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:22Z","lastTransitionTime":"2026-02-14T10:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.083347 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.083387 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.083400 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.083419 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.083432 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:23Z","lastTransitionTime":"2026-02-14T10:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.195590 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.195626 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.195638 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.195652 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.195662 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:23Z","lastTransitionTime":"2026-02-14T10:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.298703 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.298818 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.298845 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.298872 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.298889 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:23Z","lastTransitionTime":"2026-02-14T10:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.396326 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.396375 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:23 crc kubenswrapper[4736]: E0214 10:43:23.396464 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:23 crc kubenswrapper[4736]: E0214 10:43:23.396576 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.396675 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:23 crc kubenswrapper[4736]: E0214 10:43:23.396814 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.396858 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:23 crc kubenswrapper[4736]: E0214 10:43:23.396951 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.401845 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.401883 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.401893 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.401910 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.401923 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:23Z","lastTransitionTime":"2026-02-14T10:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.406538 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:51:13.278377065 +0000 UTC Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.504513 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.504571 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.504592 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.504625 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.504649 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:23Z","lastTransitionTime":"2026-02-14T10:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.607529 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.607892 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.608006 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.608072 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.608153 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:23Z","lastTransitionTime":"2026-02-14T10:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.711119 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.711154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.711164 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.711179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.711190 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:23Z","lastTransitionTime":"2026-02-14T10:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.814562 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.814607 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.814621 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.814638 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.814650 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:23Z","lastTransitionTime":"2026-02-14T10:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.917869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.917941 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.917964 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.917994 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:23 crc kubenswrapper[4736]: I0214 10:43:23.918016 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:23Z","lastTransitionTime":"2026-02-14T10:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.020670 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.020703 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.020714 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.020730 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.020764 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.123702 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.123783 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.123800 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.123825 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.123842 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.230250 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.230307 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.230325 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.230354 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.230372 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.333956 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.334011 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.334027 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.334050 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.334067 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.407144 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:02:52.517541645 +0000 UTC Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.437429 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.437503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.437526 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.437558 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.437581 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.540391 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.540436 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.540450 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.540468 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.540482 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.643120 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.643179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.643195 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.643222 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.643249 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.746699 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.746792 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.746816 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.746840 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.746858 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.849317 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.849390 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.849410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.849434 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.849450 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.952514 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.952570 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.952609 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.952639 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:24 crc kubenswrapper[4736]: I0214 10:43:24.952660 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:24Z","lastTransitionTime":"2026-02-14T10:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.056877 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.056956 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.056981 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.057011 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.057036 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.159329 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.159377 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.159412 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.159426 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.159434 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.262036 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.262097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.262122 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.262151 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.262215 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.365181 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.365272 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.365314 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.365344 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.365365 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.396783 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.396846 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.396783 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:25 crc kubenswrapper[4736]: E0214 10:43:25.396921 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.396943 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:25 crc kubenswrapper[4736]: E0214 10:43:25.397104 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:25 crc kubenswrapper[4736]: E0214 10:43:25.397293 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:25 crc kubenswrapper[4736]: E0214 10:43:25.397413 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.408253 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:37:47.611524041 +0000 UTC Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.468172 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.468242 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.468265 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.468323 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.468346 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.570377 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.570412 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.570422 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.570435 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.570444 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.673689 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.674070 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.674187 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.674323 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.674465 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.777380 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.777453 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.777470 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.777516 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.777531 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.880045 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.880122 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.880135 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.880152 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.880165 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.983029 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.983090 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.983105 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.983126 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:25 crc kubenswrapper[4736]: I0214 10:43:25.983140 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:25Z","lastTransitionTime":"2026-02-14T10:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.085878 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.085927 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.085939 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.085966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.085983 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:26Z","lastTransitionTime":"2026-02-14T10:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.188696 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.188796 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.188823 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.188853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.188878 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:26Z","lastTransitionTime":"2026-02-14T10:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.291740 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.291842 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.291864 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.291893 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.291915 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:26Z","lastTransitionTime":"2026-02-14T10:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.394986 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.396110 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.396160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.396195 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.396216 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:26Z","lastTransitionTime":"2026-02-14T10:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.409480 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 19:20:32.97100888 +0000 UTC Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.499513 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.499570 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.499586 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.499610 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.499628 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:26Z","lastTransitionTime":"2026-02-14T10:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.603029 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.603081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.603097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.603120 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.603141 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:26Z","lastTransitionTime":"2026-02-14T10:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.705818 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.705861 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.705870 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.705884 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.705894 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:26Z","lastTransitionTime":"2026-02-14T10:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.809832 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.809886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.809902 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.809924 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.809939 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:26Z","lastTransitionTime":"2026-02-14T10:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.911899 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.911930 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.911938 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.911949 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:26 crc kubenswrapper[4736]: I0214 10:43:26.911957 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:26Z","lastTransitionTime":"2026-02-14T10:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.014171 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.014205 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.014214 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.014243 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.014254 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.025938 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/1.log" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.026417 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/0.log" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.026457 4736 generic.go:334] "Generic (PLEG): container finished" podID="db7224ab-d0ab-49e3-9154-4d9047057681" containerID="8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a" exitCode=1 Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.026484 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zm7d8" event={"ID":"db7224ab-d0ab-49e3-9154-4d9047057681","Type":"ContainerDied","Data":"8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.026516 4736 scope.go:117] "RemoveContainer" containerID="e54391f89eaed208eabec49f60f01fbb9d6380294919dcca11580fc7622670f1" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.026949 4736 scope.go:117] "RemoveContainer" containerID="8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a" Feb 14 10:43:27 crc kubenswrapper[4736]: E0214 10:43:27.027125 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zm7d8_openshift-multus(db7224ab-d0ab-49e3-9154-4d9047057681)\"" pod="openshift-multus/multus-zm7d8" podUID="db7224ab-d0ab-49e3-9154-4d9047057681" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.050613 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" 
podStartSLOduration=97.050596925 podStartE2EDuration="1m37.050596925s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.050512343 +0000 UTC m=+117.419139721" watchObservedRunningTime="2026-02-14 10:43:27.050596925 +0000 UTC m=+117.419224293" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.064964 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=49.064947239 podStartE2EDuration="49.064947239s" podCreationTimestamp="2026-02-14 10:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.063485479 +0000 UTC m=+117.432112877" watchObservedRunningTime="2026-02-14 10:43:27.064947239 +0000 UTC m=+117.433574597" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.116703 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.117023 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.117100 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.117170 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.117233 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.133025 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jdrpk" podStartSLOduration=97.133005868 podStartE2EDuration="1m37.133005868s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.119166228 +0000 UTC m=+117.487793606" watchObservedRunningTime="2026-02-14 10:43:27.133005868 +0000 UTC m=+117.501633246" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.133116 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=94.133112251 podStartE2EDuration="1m34.133112251s" podCreationTimestamp="2026-02-14 10:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.132871454 +0000 UTC m=+117.501498822" watchObservedRunningTime="2026-02-14 10:43:27.133112251 +0000 UTC m=+117.501739629" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.156470 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=68.156448722 podStartE2EDuration="1m8.156448722s" podCreationTimestamp="2026-02-14 10:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.144918895 +0000 UTC m=+117.513546263" watchObservedRunningTime="2026-02-14 10:43:27.156448722 +0000 UTC m=+117.525076090" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.173491 
4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8fm57" podStartSLOduration=97.173472009 podStartE2EDuration="1m37.173472009s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.156908824 +0000 UTC m=+117.525536202" watchObservedRunningTime="2026-02-14 10:43:27.173472009 +0000 UTC m=+117.542099377" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.190947 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-w6fw9" podStartSLOduration=97.190927438 podStartE2EDuration="1m37.190927438s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.190407754 +0000 UTC m=+117.559035122" watchObservedRunningTime="2026-02-14 10:43:27.190927438 +0000 UTC m=+117.559554806" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.218813 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=96.218792014 podStartE2EDuration="1m36.218792014s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.218780953 +0000 UTC m=+117.587408341" watchObservedRunningTime="2026-02-14 10:43:27.218792014 +0000 UTC m=+117.587419392" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.219510 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.219554 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.219564 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.219578 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.219590 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.289356 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4qqc" podStartSLOduration=96.289339291 podStartE2EDuration="1m36.289339291s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.288603521 +0000 UTC m=+117.657230889" watchObservedRunningTime="2026-02-14 10:43:27.289339291 +0000 UTC m=+117.657966659" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.310411 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=96.310393559 podStartE2EDuration="1m36.310393559s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:27.308865627 +0000 UTC m=+117.677493015" watchObservedRunningTime="2026-02-14 10:43:27.310393559 +0000 
UTC m=+117.679020927" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.321560 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.321589 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.321598 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.321611 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.321620 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.396938 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.397011 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:27 crc kubenswrapper[4736]: E0214 10:43:27.397076 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:27 crc kubenswrapper[4736]: E0214 10:43:27.397175 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.397425 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:27 crc kubenswrapper[4736]: E0214 10:43:27.397506 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.396954 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:27 crc kubenswrapper[4736]: E0214 10:43:27.397936 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.410295 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:17:07.298021366 +0000 UTC Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.424025 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.424275 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.424488 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.424698 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.424881 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.527584 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.527613 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.527621 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.527634 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.527642 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.630473 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.630925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.631099 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.631281 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.631444 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.734439 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.734757 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.734870 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.734968 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.735064 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.838227 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.838602 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.839003 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.839338 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.839705 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.943091 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.943507 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.943650 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.943843 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:27 crc kubenswrapper[4736]: I0214 10:43:27.944074 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:27Z","lastTransitionTime":"2026-02-14T10:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.031271 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/1.log" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.046928 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.046988 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.047005 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.047028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.047047 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:28Z","lastTransitionTime":"2026-02-14T10:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.149611 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.149640 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.149648 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.149661 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.149669 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:28Z","lastTransitionTime":"2026-02-14T10:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.252533 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.252563 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.252574 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.252588 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.252600 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:28Z","lastTransitionTime":"2026-02-14T10:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.355649 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.355704 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.355763 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.355794 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.355811 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:28Z","lastTransitionTime":"2026-02-14T10:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.411250 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:56:54.032370327 +0000 UTC Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.458245 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.458522 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.458598 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.458667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.458734 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:28Z","lastTransitionTime":"2026-02-14T10:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.498113 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.498358 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.498452 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.498541 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.498620 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T10:43:28Z","lastTransitionTime":"2026-02-14T10:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.538365 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn"] Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.539049 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.543554 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.544056 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.544231 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.545558 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.672263 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60d9e78f-db75-4863-80f5-f1e3336330cf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.672314 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/60d9e78f-db75-4863-80f5-f1e3336330cf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.672337 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/60d9e78f-db75-4863-80f5-f1e3336330cf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.672360 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/60d9e78f-db75-4863-80f5-f1e3336330cf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.672380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d9e78f-db75-4863-80f5-f1e3336330cf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.772830 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/60d9e78f-db75-4863-80f5-f1e3336330cf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.773135 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/60d9e78f-db75-4863-80f5-f1e3336330cf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.773323 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/60d9e78f-db75-4863-80f5-f1e3336330cf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.773484 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d9e78f-db75-4863-80f5-f1e3336330cf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.773892 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60d9e78f-db75-4863-80f5-f1e3336330cf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.773276 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/60d9e78f-db75-4863-80f5-f1e3336330cf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.773988 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/60d9e78f-db75-4863-80f5-f1e3336330cf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.773444 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/60d9e78f-db75-4863-80f5-f1e3336330cf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.780151 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60d9e78f-db75-4863-80f5-f1e3336330cf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.805125 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d9e78f-db75-4863-80f5-f1e3336330cf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bkrnn\" (UID: \"60d9e78f-db75-4863-80f5-f1e3336330cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:28 crc kubenswrapper[4736]: I0214 10:43:28.852245 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" Feb 14 10:43:29 crc kubenswrapper[4736]: I0214 10:43:29.036878 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" event={"ID":"60d9e78f-db75-4863-80f5-f1e3336330cf","Type":"ContainerStarted","Data":"4587f8fdda3414b9e42a354b536f56ce4be51a1804929e9084d8c15a043264e3"} Feb 14 10:43:29 crc kubenswrapper[4736]: I0214 10:43:29.396498 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:29 crc kubenswrapper[4736]: I0214 10:43:29.396620 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:29 crc kubenswrapper[4736]: E0214 10:43:29.396878 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:29 crc kubenswrapper[4736]: I0214 10:43:29.396952 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:29 crc kubenswrapper[4736]: I0214 10:43:29.396963 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:29 crc kubenswrapper[4736]: E0214 10:43:29.397297 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:29 crc kubenswrapper[4736]: E0214 10:43:29.397428 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:29 crc kubenswrapper[4736]: E0214 10:43:29.397550 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:29 crc kubenswrapper[4736]: I0214 10:43:29.397680 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:43:29 crc kubenswrapper[4736]: E0214 10:43:29.397912 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" Feb 14 10:43:29 crc kubenswrapper[4736]: I0214 10:43:29.413079 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:31:45.423492491 +0000 UTC Feb 14 10:43:29 crc kubenswrapper[4736]: I0214 10:43:29.413144 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 14 10:43:29 crc kubenswrapper[4736]: I0214 10:43:29.425533 4736 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 10:43:30 crc kubenswrapper[4736]: I0214 10:43:30.042603 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" event={"ID":"60d9e78f-db75-4863-80f5-f1e3336330cf","Type":"ContainerStarted","Data":"ef259d853c641a4c90041324233e67a662a385a24a44c2c98118c632464d1be0"} Feb 14 10:43:30 crc kubenswrapper[4736]: I0214 10:43:30.069412 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bkrnn" podStartSLOduration=100.069381557 podStartE2EDuration="1m40.069381557s" podCreationTimestamp="2026-02-14 
10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:30.067475115 +0000 UTC m=+120.436102613" watchObservedRunningTime="2026-02-14 10:43:30.069381557 +0000 UTC m=+120.438008975" Feb 14 10:43:30 crc kubenswrapper[4736]: E0214 10:43:30.318407 4736 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 14 10:43:30 crc kubenswrapper[4736]: E0214 10:43:30.500526 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 10:43:31 crc kubenswrapper[4736]: I0214 10:43:31.397081 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:31 crc kubenswrapper[4736]: I0214 10:43:31.397127 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:31 crc kubenswrapper[4736]: I0214 10:43:31.397138 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:31 crc kubenswrapper[4736]: I0214 10:43:31.397176 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:31 crc kubenswrapper[4736]: E0214 10:43:31.397257 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:31 crc kubenswrapper[4736]: E0214 10:43:31.397509 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:31 crc kubenswrapper[4736]: E0214 10:43:31.397709 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:31 crc kubenswrapper[4736]: E0214 10:43:31.398253 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:33 crc kubenswrapper[4736]: I0214 10:43:33.396550 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:33 crc kubenswrapper[4736]: I0214 10:43:33.396633 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:33 crc kubenswrapper[4736]: I0214 10:43:33.396587 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:33 crc kubenswrapper[4736]: E0214 10:43:33.396726 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:33 crc kubenswrapper[4736]: I0214 10:43:33.396800 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:33 crc kubenswrapper[4736]: E0214 10:43:33.396892 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:33 crc kubenswrapper[4736]: E0214 10:43:33.396968 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:33 crc kubenswrapper[4736]: E0214 10:43:33.397054 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:35 crc kubenswrapper[4736]: I0214 10:43:35.397045 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:35 crc kubenswrapper[4736]: I0214 10:43:35.397132 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:35 crc kubenswrapper[4736]: I0214 10:43:35.397187 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:35 crc kubenswrapper[4736]: I0214 10:43:35.397225 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:35 crc kubenswrapper[4736]: E0214 10:43:35.397489 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:35 crc kubenswrapper[4736]: E0214 10:43:35.397589 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:35 crc kubenswrapper[4736]: E0214 10:43:35.397353 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:35 crc kubenswrapper[4736]: E0214 10:43:35.397775 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:35 crc kubenswrapper[4736]: E0214 10:43:35.501852 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 10:43:37 crc kubenswrapper[4736]: I0214 10:43:37.396386 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:37 crc kubenswrapper[4736]: I0214 10:43:37.396495 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:37 crc kubenswrapper[4736]: E0214 10:43:37.396569 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:37 crc kubenswrapper[4736]: I0214 10:43:37.396495 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:37 crc kubenswrapper[4736]: I0214 10:43:37.396732 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:37 crc kubenswrapper[4736]: E0214 10:43:37.396787 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:37 crc kubenswrapper[4736]: E0214 10:43:37.396888 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:37 crc kubenswrapper[4736]: E0214 10:43:37.397011 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:39 crc kubenswrapper[4736]: I0214 10:43:39.397041 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:39 crc kubenswrapper[4736]: E0214 10:43:39.397620 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:39 crc kubenswrapper[4736]: I0214 10:43:39.397066 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:39 crc kubenswrapper[4736]: E0214 10:43:39.397784 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:39 crc kubenswrapper[4736]: I0214 10:43:39.397105 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:39 crc kubenswrapper[4736]: I0214 10:43:39.397046 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:39 crc kubenswrapper[4736]: E0214 10:43:39.397954 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:39 crc kubenswrapper[4736]: E0214 10:43:39.398150 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:40 crc kubenswrapper[4736]: I0214 10:43:40.399054 4736 scope.go:117] "RemoveContainer" containerID="8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a" Feb 14 10:43:40 crc kubenswrapper[4736]: I0214 10:43:40.399918 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:43:40 crc kubenswrapper[4736]: E0214 10:43:40.400142 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7vfr_openshift-ovn-kubernetes(4586e477-2198-4f75-aeba-0eaf894cde1a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" Feb 14 10:43:40 crc kubenswrapper[4736]: E0214 10:43:40.502151 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 10:43:41 crc kubenswrapper[4736]: I0214 10:43:41.076932 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/1.log" Feb 14 10:43:41 crc kubenswrapper[4736]: I0214 10:43:41.076988 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zm7d8" event={"ID":"db7224ab-d0ab-49e3-9154-4d9047057681","Type":"ContainerStarted","Data":"a9a51d42096cd417ea48f3ae1a8ec91320986b90813f073e061032c9ca97040f"} Feb 14 10:43:41 crc kubenswrapper[4736]: I0214 10:43:41.105990 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zm7d8" podStartSLOduration=111.105974479 podStartE2EDuration="1m51.105974479s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:41.105859086 +0000 UTC m=+131.474486544" watchObservedRunningTime="2026-02-14 10:43:41.105974479 +0000 UTC m=+131.474601847" Feb 14 10:43:41 crc kubenswrapper[4736]: I0214 10:43:41.396995 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:41 crc kubenswrapper[4736]: I0214 10:43:41.397005 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:41 crc kubenswrapper[4736]: I0214 10:43:41.397024 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:41 crc kubenswrapper[4736]: I0214 10:43:41.397033 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:41 crc kubenswrapper[4736]: E0214 10:43:41.397238 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:41 crc kubenswrapper[4736]: E0214 10:43:41.397362 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:41 crc kubenswrapper[4736]: E0214 10:43:41.397526 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:41 crc kubenswrapper[4736]: E0214 10:43:41.397616 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:43 crc kubenswrapper[4736]: I0214 10:43:43.396186 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:43 crc kubenswrapper[4736]: I0214 10:43:43.396226 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:43 crc kubenswrapper[4736]: E0214 10:43:43.396790 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:43 crc kubenswrapper[4736]: I0214 10:43:43.396269 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:43 crc kubenswrapper[4736]: I0214 10:43:43.396255 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:43 crc kubenswrapper[4736]: E0214 10:43:43.396939 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:43 crc kubenswrapper[4736]: E0214 10:43:43.397037 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:43 crc kubenswrapper[4736]: E0214 10:43:43.397191 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:45 crc kubenswrapper[4736]: I0214 10:43:45.396972 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:45 crc kubenswrapper[4736]: I0214 10:43:45.397074 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:45 crc kubenswrapper[4736]: I0214 10:43:45.396977 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:45 crc kubenswrapper[4736]: E0214 10:43:45.397191 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:45 crc kubenswrapper[4736]: E0214 10:43:45.397340 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:45 crc kubenswrapper[4736]: I0214 10:43:45.397152 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:45 crc kubenswrapper[4736]: E0214 10:43:45.397587 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:45 crc kubenswrapper[4736]: E0214 10:43:45.397833 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:45 crc kubenswrapper[4736]: E0214 10:43:45.503777 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 10:43:47 crc kubenswrapper[4736]: I0214 10:43:47.397059 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:47 crc kubenswrapper[4736]: I0214 10:43:47.397138 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:47 crc kubenswrapper[4736]: I0214 10:43:47.397063 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:47 crc kubenswrapper[4736]: E0214 10:43:47.397303 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:47 crc kubenswrapper[4736]: I0214 10:43:47.397339 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:47 crc kubenswrapper[4736]: E0214 10:43:47.397468 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:47 crc kubenswrapper[4736]: E0214 10:43:47.397607 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:47 crc kubenswrapper[4736]: E0214 10:43:47.397789 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:49 crc kubenswrapper[4736]: I0214 10:43:49.396871 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:49 crc kubenswrapper[4736]: I0214 10:43:49.396939 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:49 crc kubenswrapper[4736]: I0214 10:43:49.396899 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:49 crc kubenswrapper[4736]: I0214 10:43:49.396891 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:49 crc kubenswrapper[4736]: E0214 10:43:49.397027 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:49 crc kubenswrapper[4736]: E0214 10:43:49.397180 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:49 crc kubenswrapper[4736]: E0214 10:43:49.397281 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:49 crc kubenswrapper[4736]: E0214 10:43:49.397366 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:50 crc kubenswrapper[4736]: E0214 10:43:50.504417 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 10:43:51 crc kubenswrapper[4736]: I0214 10:43:51.397107 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:51 crc kubenswrapper[4736]: E0214 10:43:51.397305 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:51 crc kubenswrapper[4736]: I0214 10:43:51.397724 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:51 crc kubenswrapper[4736]: E0214 10:43:51.397911 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:51 crc kubenswrapper[4736]: I0214 10:43:51.397999 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:51 crc kubenswrapper[4736]: I0214 10:43:51.398005 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:51 crc kubenswrapper[4736]: E0214 10:43:51.398129 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:51 crc kubenswrapper[4736]: E0214 10:43:51.398233 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:53 crc kubenswrapper[4736]: I0214 10:43:53.396793 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:53 crc kubenswrapper[4736]: I0214 10:43:53.396883 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:53 crc kubenswrapper[4736]: E0214 10:43:53.397013 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:53 crc kubenswrapper[4736]: I0214 10:43:53.397049 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:53 crc kubenswrapper[4736]: I0214 10:43:53.396825 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:53 crc kubenswrapper[4736]: E0214 10:43:53.397911 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:53 crc kubenswrapper[4736]: E0214 10:43:53.398051 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:53 crc kubenswrapper[4736]: E0214 10:43:53.398167 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:53 crc kubenswrapper[4736]: I0214 10:43:53.398413 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:43:54 crc kubenswrapper[4736]: I0214 10:43:54.128871 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/3.log" Feb 14 10:43:54 crc kubenswrapper[4736]: I0214 10:43:54.132248 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerStarted","Data":"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89"} Feb 14 10:43:54 crc kubenswrapper[4736]: I0214 10:43:54.133113 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 
10:43:54 crc kubenswrapper[4736]: I0214 10:43:54.158233 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podStartSLOduration=124.158219332 podStartE2EDuration="2m4.158219332s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:43:54.157355818 +0000 UTC m=+144.525983196" watchObservedRunningTime="2026-02-14 10:43:54.158219332 +0000 UTC m=+144.526846700" Feb 14 10:43:54 crc kubenswrapper[4736]: I0214 10:43:54.252322 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-przcz"] Feb 14 10:43:54 crc kubenswrapper[4736]: I0214 10:43:54.252465 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:54 crc kubenswrapper[4736]: E0214 10:43:54.252610 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:55 crc kubenswrapper[4736]: I0214 10:43:55.396647 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:55 crc kubenswrapper[4736]: I0214 10:43:55.396704 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:55 crc kubenswrapper[4736]: E0214 10:43:55.396855 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:55 crc kubenswrapper[4736]: E0214 10:43:55.397042 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:55 crc kubenswrapper[4736]: I0214 10:43:55.397209 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:55 crc kubenswrapper[4736]: E0214 10:43:55.397317 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:55 crc kubenswrapper[4736]: E0214 10:43:55.506164 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 10:43:56 crc kubenswrapper[4736]: I0214 10:43:56.397344 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:56 crc kubenswrapper[4736]: E0214 10:43:56.397729 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:57 crc kubenswrapper[4736]: I0214 10:43:57.396464 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:57 crc kubenswrapper[4736]: I0214 10:43:57.396502 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:57 crc kubenswrapper[4736]: I0214 10:43:57.396584 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:57 crc kubenswrapper[4736]: E0214 10:43:57.396992 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:57 crc kubenswrapper[4736]: E0214 10:43:57.397292 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:57 crc kubenswrapper[4736]: E0214 10:43:57.397457 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:43:58 crc kubenswrapper[4736]: I0214 10:43:58.397085 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:43:58 crc kubenswrapper[4736]: E0214 10:43:58.397300 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:43:59 crc kubenswrapper[4736]: I0214 10:43:59.249690 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.249904 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:46:01.249877112 +0000 UTC m=+271.618504510 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:43:59 crc kubenswrapper[4736]: I0214 10:43:59.351305 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:59 crc kubenswrapper[4736]: I0214 10:43:59.351409 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:59 crc kubenswrapper[4736]: I0214 10:43:59.351495 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351531 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 
10:43:59 crc kubenswrapper[4736]: I0214 10:43:59.351557 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351584 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351611 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351681 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351705 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 10:46:01.351671807 +0000 UTC m=+271.720299225 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351529 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351733 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351811 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:46:01.35178362 +0000 UTC m=+271.720411028 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351835 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351859 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351865 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 10:46:01.351831362 +0000 UTC m=+271.720458780 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.351998 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-14 10:46:01.351911704 +0000 UTC m=+271.720539122 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 10:43:59 crc kubenswrapper[4736]: I0214 10:43:59.396523 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:43:59 crc kubenswrapper[4736]: I0214 10:43:59.396532 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.396701 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 10:43:59 crc kubenswrapper[4736]: I0214 10:43:59.396544 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.396771 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:43:59 crc kubenswrapper[4736]: E0214 10:43:59.396926 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 10:44:00 crc kubenswrapper[4736]: I0214 10:44:00.397013 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:44:00 crc kubenswrapper[4736]: E0214 10:44:00.408206 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-przcz" podUID="df467c01-3f4e-41c8-b5fa-b14831cfe827" Feb 14 10:44:01 crc kubenswrapper[4736]: I0214 10:44:01.396491 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:44:01 crc kubenswrapper[4736]: I0214 10:44:01.396511 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:44:01 crc kubenswrapper[4736]: I0214 10:44:01.396516 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:44:01 crc kubenswrapper[4736]: I0214 10:44:01.399581 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 14 10:44:01 crc kubenswrapper[4736]: I0214 10:44:01.399917 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 14 10:44:01 crc kubenswrapper[4736]: I0214 10:44:01.400359 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 14 10:44:01 crc kubenswrapper[4736]: I0214 10:44:01.401716 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 14 10:44:02 crc kubenswrapper[4736]: I0214 10:44:02.396284 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:44:02 crc kubenswrapper[4736]: I0214 10:44:02.402135 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 14 10:44:02 crc kubenswrapper[4736]: I0214 10:44:02.404795 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.064254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.115226 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-68p48"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.115800 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.116646 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.117104 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.118067 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7n6r5"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.119203 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.124931 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7hmxn"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.125560 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.126267 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.126506 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-r4f7j"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.126719 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.128895 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.135485 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.136519 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.155343 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.155905 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.156235 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.156415 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.156432 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.156255 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.156288 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.156331 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.156642 4736 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.157350 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.157375 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.157477 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.157501 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.157554 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.157638 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.157770 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.157831 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.158193 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.158871 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.159583 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.159680 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.160240 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168234 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168270 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168303 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168509 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168593 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168843 4736 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168951 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169029 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168957 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168975 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169000 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169403 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169454 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169361 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169410 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169552 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169588 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169639 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169640 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168260 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.168642 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169827 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169865 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169949 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.169972 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.170072 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.170186 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.170208 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.170341 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.170414 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.170431 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.171125 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.171281 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.173202 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.173370 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.173958 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z7cf7"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.174531 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.175007 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.175058 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tckpd"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.175475 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.175695 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tckpd"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.190165 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.190621 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.190776 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.191032 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.191104 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.191371 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.191474 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.191892 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.192168 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.192313 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.192433 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.192548 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.192631 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.192628 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.195446 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l6bdf"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.195959 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l6bdf"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.201662 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-brfbh"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.202220 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.203188 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.204109 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.204335 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.207652 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.208135 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.208569 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.208669 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.208899 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.209292 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.209485 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.210816 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.211596 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.211602 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.223199 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rwdt7"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.224161 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.224207 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fss8"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.224874 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.224991 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.225309 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.233508 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.236809 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.249722 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.250085 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.252426 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.253542 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.256307 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nv7wg"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.256614 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt"]
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.257860 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.257920 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.257953 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.258374 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.258648 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.259236 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268300 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/71913368-a56a-4e9c-b23b-e6b69f79c110-node-pullsecrets\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268348 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-config\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268375 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71913368-a56a-4e9c-b23b-e6b69f79c110-encryption-config\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268396 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268418 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54mw\" (UniqueName: \"kubernetes.io/projected/ca613a2a-3f27-44b2-a750-ed66276e5560-kube-api-access-x54mw\") pod \"openshift-apiserver-operator-796bbdcf4f-8bcqb\" (UID: \"ca613a2a-3f27-44b2-a750-ed66276e5560\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268440 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/842b9d2e-016c-412f-803d-a87a69009268-trusted-ca\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268459 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e900363-6bcf-4546-88e8-61fb89228809-config\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268481 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-images\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268505 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e900363-6bcf-4546-88e8-61fb89228809-service-ca-bundle\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268529 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268550 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71913368-a56a-4e9c-b23b-e6b69f79c110-serving-cert\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268571 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268592 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc4bed44-3d6b-4055-bb84-75071c99aef8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dxngl\" (UID: \"bc4bed44-3d6b-4055-bb84-75071c99aef8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268614 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm7gz\" (UniqueName: \"kubernetes.io/projected/484eab2e-2a8d-45f4-ada7-92639c4e6bcb-kube-api-access-mm7gz\") pod \"openshift-controller-manager-operator-756b6f6bc6-thpxl\" (UID: \"484eab2e-2a8d-45f4-ada7-92639c4e6bcb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268654 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71913368-a56a-4e9c-b23b-e6b69f79c110-audit-dir\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268674 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-oauth-config\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268693 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca613a2a-3f27-44b2-a750-ed66276e5560-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8bcqb\" (UID: \"ca613a2a-3f27-44b2-a750-ed66276e5560\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268733 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-oauth-serving-cert\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268772 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-etcd-serving-ca\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268790 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-image-import-ca\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268811 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e900363-6bcf-4546-88e8-61fb89228809-serving-cert\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268829 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-audit\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268849 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/484eab2e-2a8d-45f4-ada7-92639c4e6bcb-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-thpxl\" (UID: \"484eab2e-2a8d-45f4-ada7-92639c4e6bcb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268868 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268905 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/842b9d2e-016c-412f-803d-a87a69009268-config\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268924 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268949 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71913368-a56a-4e9c-b23b-e6b69f79c110-etcd-client\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268972 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.268995 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269019 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d3351d-662a-4e3e-b7fa-f7eb332a1506-serving-cert\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269042 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2206c275-5448-4cfb-bdfb-25180b3c01e1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qr5lk\" (UID: \"2206c275-5448-4cfb-bdfb-25180b3c01e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269064 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269086 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnm2\" (UniqueName: \"kubernetes.io/projected/21d3351d-662a-4e3e-b7fa-f7eb332a1506-kube-api-access-rdnm2\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269108 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2206c275-5448-4cfb-bdfb-25180b3c01e1-serving-cert\") pod \"openshift-config-operator-7777fb866f-qr5lk\" (UID: \"2206c275-5448-4cfb-bdfb-25180b3c01e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269129 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrq8\" (UniqueName: \"kubernetes.io/projected/bc4bed44-3d6b-4055-bb84-75071c99aef8-kube-api-access-xzrq8\") pod \"cluster-samples-operator-665b6dd947-dxngl\" (UID: \"bc4bed44-3d6b-4055-bb84-75071c99aef8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269148 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e900363-6bcf-4546-88e8-61fb89228809-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269180 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484eab2e-2a8d-45f4-ada7-92639c4e6bcb-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-thpxl\" (UID: \"484eab2e-2a8d-45f4-ada7-92639c4e6bcb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269202 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269224 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269246 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xg66\" (UniqueName: \"kubernetes.io/projected/19ffdb45-8f94-48d2-93f8-b139825d4063-kube-api-access-2xg66\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269270 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-serving-cert\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269291 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-service-ca\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269315 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/446f17e4-455e-45ae-affc-f27215421058-audit-dir\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269336 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269360 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgnbr\" (UniqueName: \"kubernetes.io/projected/2206c275-5448-4cfb-bdfb-25180b3c01e1-kube-api-access-sgnbr\") pod \"openshift-config-operator-7777fb866f-qr5lk\" (UID: \"2206c275-5448-4cfb-bdfb-25180b3c01e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269381 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fpn4\" (UniqueName: \"kubernetes.io/projected/71913368-a56a-4e9c-b23b-e6b69f79c110-kube-api-access-5fpn4\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269402 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-config\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269425 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqfmh\" (UniqueName: \"kubernetes.io/projected/9e900363-6bcf-4546-88e8-61fb89228809-kube-api-access-wqfmh\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269446 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-config\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269467 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft47j\" (UniqueName: \"kubernetes.io/projected/446f17e4-455e-45ae-affc-f27215421058-kube-api-access-ft47j\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269490 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca613a2a-3f27-44b2-a750-ed66276e5560-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8bcqb\" (UID: \"ca613a2a-3f27-44b2-a750-ed66276e5560\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269521 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/842b9d2e-016c-412f-803d-a87a69009268-serving-cert\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269543 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269583 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-console-config\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269604 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-audit-policies\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn"
Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269626 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269647 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvmbl\" (UniqueName: \"kubernetes.io/projected/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-kube-api-access-wvmbl\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269671 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htzst\" (UniqueName: \"kubernetes.io/projected/842b9d2e-016c-412f-803d-a87a69009268-kube-api-access-htzst\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269694 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-client-ca\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269715 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-trusted-ca-bundle\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.269889 4736 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.270593 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.270864 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.271017 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.271275 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.276028 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.276501 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.276848 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.277135 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.278790 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.278829 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 14 
10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.278890 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.279566 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.279662 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.280187 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.280823 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.288364 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.288946 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.289109 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.289704 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.290149 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.290340 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.290687 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.291316 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.291645 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.296083 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.314904 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.319218 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.320639 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.320674 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.321719 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.322635 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7n6r5"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.327373 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.331015 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.335573 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.340816 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.342792 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nhfqg"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.345554 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.346974 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.347024 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.348928 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.350251 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.350945 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.352015 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.352727 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v52bz"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.354090 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.355040 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.355640 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.355780 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.356387 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.357378 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.357412 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.357938 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.358097 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-spvg2"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.358988 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.359936 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.360330 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.366181 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.367352 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-459gs"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.368038 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.368419 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7hmxn"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.370278 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.370736 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvmbl\" (UniqueName: \"kubernetes.io/projected/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-kube-api-access-wvmbl\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.370774 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htzst\" (UniqueName: \"kubernetes.io/projected/842b9d2e-016c-412f-803d-a87a69009268-kube-api-access-htzst\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.370810 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-client-ca\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.370832 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.370875 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddd9068-95aa-4e08-bed7-b400152e1766-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kx9jr\" (UID: \"4ddd9068-95aa-4e08-bed7-b400152e1766\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.370892 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2005f6fa-25f1-421c-9028-5cea529c61be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wkxgj\" (UID: \"2005f6fa-25f1-421c-9028-5cea529c61be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.370978 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-trusted-ca-bundle\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371000 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ddd9068-95aa-4e08-bed7-b400152e1766-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kx9jr\" (UID: \"4ddd9068-95aa-4e08-bed7-b400152e1766\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371016 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwmlj\" (UniqueName: \"kubernetes.io/projected/20e06062-7725-4eb4-8a48-7fee4dd1340a-kube-api-access-pwmlj\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371031 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e06062-7725-4eb4-8a48-7fee4dd1340a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371046 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/71913368-a56a-4e9c-b23b-e6b69f79c110-node-pullsecrets\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371062 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r9cf\" (UniqueName: \"kubernetes.io/projected/b3550c81-1f31-4800-b399-4168db6f20fc-kube-api-access-7r9cf\") pod \"control-plane-machine-set-operator-78cbb6b69f-8bxbt\" (UID: \"b3550c81-1f31-4800-b399-4168db6f20fc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371090 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-config\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371109 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2005f6fa-25f1-421c-9028-5cea529c61be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wkxgj\" (UID: \"2005f6fa-25f1-421c-9028-5cea529c61be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371123 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e06062-7725-4eb4-8a48-7fee4dd1340a-audit-policies\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371140 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71913368-a56a-4e9c-b23b-e6b69f79c110-encryption-config\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371154 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371169 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/842b9d2e-016c-412f-803d-a87a69009268-trusted-ca\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371184 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e900363-6bcf-4546-88e8-61fb89228809-config\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371200 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-images\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371218 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x54mw\" (UniqueName: \"kubernetes.io/projected/ca613a2a-3f27-44b2-a750-ed66276e5560-kube-api-access-x54mw\") pod \"openshift-apiserver-operator-796bbdcf4f-8bcqb\" (UID: \"ca613a2a-3f27-44b2-a750-ed66276e5560\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371237 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e900363-6bcf-4546-88e8-61fb89228809-service-ca-bundle\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371636 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/71913368-a56a-4e9c-b23b-e6b69f79c110-node-pullsecrets\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.371255 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372191 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/178ad6b5-adb5-40a2-9888-52b2a8b01d66-etcd-ca\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372210 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71913368-a56a-4e9c-b23b-e6b69f79c110-serving-cert\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372226 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372510 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc4bed44-3d6b-4055-bb84-75071c99aef8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dxngl\" (UID: \"bc4bed44-3d6b-4055-bb84-75071c99aef8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372539 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm7gz\" (UniqueName: \"kubernetes.io/projected/484eab2e-2a8d-45f4-ada7-92639c4e6bcb-kube-api-access-mm7gz\") pod \"openshift-controller-manager-operator-756b6f6bc6-thpxl\" (UID: \"484eab2e-2a8d-45f4-ada7-92639c4e6bcb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372587 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71913368-a56a-4e9c-b23b-e6b69f79c110-audit-dir\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372610 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20e06062-7725-4eb4-8a48-7fee4dd1340a-encryption-config\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372628 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca613a2a-3f27-44b2-a750-ed66276e5560-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8bcqb\" (UID: \"ca613a2a-3f27-44b2-a750-ed66276e5560\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372654 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-oauth-config\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372670 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/178ad6b5-adb5-40a2-9888-52b2a8b01d66-etcd-client\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372686 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e06062-7725-4eb4-8a48-7fee4dd1340a-audit-dir\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-oauth-serving-cert\") pod 
\"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372719 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-etcd-serving-ca\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372735 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-image-import-ca\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372766 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-audit\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372784 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/484eab2e-2a8d-45f4-ada7-92639c4e6bcb-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-thpxl\" (UID: \"484eab2e-2a8d-45f4-ada7-92639c4e6bcb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372799 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9e900363-6bcf-4546-88e8-61fb89228809-serving-cert\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372817 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372847 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/842b9d2e-016c-412f-803d-a87a69009268-config\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372862 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372879 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3550c81-1f31-4800-b399-4168db6f20fc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8bxbt\" (UID: \"b3550c81-1f31-4800-b399-4168db6f20fc\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372897 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71913368-a56a-4e9c-b23b-e6b69f79c110-etcd-client\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372914 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372931 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372937 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-trusted-ca-bundle\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372945 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddd9068-95aa-4e08-bed7-b400152e1766-config\") pod 
\"kube-controller-manager-operator-78b949d7b-kx9jr\" (UID: \"4ddd9068-95aa-4e08-bed7-b400152e1766\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373005 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d3351d-662a-4e3e-b7fa-f7eb332a1506-serving-cert\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373028 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dswgh\" (UniqueName: \"kubernetes.io/projected/178ad6b5-adb5-40a2-9888-52b2a8b01d66-kube-api-access-dswgh\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373063 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20e06062-7725-4eb4-8a48-7fee4dd1340a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373082 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2206c275-5448-4cfb-bdfb-25180b3c01e1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qr5lk\" (UID: \"2206c275-5448-4cfb-bdfb-25180b3c01e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 
10:44:09.373099 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373116 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178ad6b5-adb5-40a2-9888-52b2a8b01d66-config\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373142 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdnm2\" (UniqueName: \"kubernetes.io/projected/21d3351d-662a-4e3e-b7fa-f7eb332a1506-kube-api-access-rdnm2\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373157 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2206c275-5448-4cfb-bdfb-25180b3c01e1-serving-cert\") pod \"openshift-config-operator-7777fb866f-qr5lk\" (UID: \"2206c275-5448-4cfb-bdfb-25180b3c01e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373172 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrq8\" (UniqueName: \"kubernetes.io/projected/bc4bed44-3d6b-4055-bb84-75071c99aef8-kube-api-access-xzrq8\") pod 
\"cluster-samples-operator-665b6dd947-dxngl\" (UID: \"bc4bed44-3d6b-4055-bb84-75071c99aef8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373200 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484eab2e-2a8d-45f4-ada7-92639c4e6bcb-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-thpxl\" (UID: \"484eab2e-2a8d-45f4-ada7-92639c4e6bcb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373216 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e900363-6bcf-4546-88e8-61fb89228809-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373233 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373248 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc 
kubenswrapper[4736]: I0214 10:44:09.373267 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/178ad6b5-adb5-40a2-9888-52b2a8b01d66-etcd-service-ca\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373282 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20e06062-7725-4eb4-8a48-7fee4dd1340a-etcd-client\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-serving-cert\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373320 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xg66\" (UniqueName: \"kubernetes.io/projected/19ffdb45-8f94-48d2-93f8-b139825d4063-kube-api-access-2xg66\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373337 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-service-ca\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 
10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373354 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/446f17e4-455e-45ae-affc-f27215421058-audit-dir\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373369 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373399 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178ad6b5-adb5-40a2-9888-52b2a8b01d66-serving-cert\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373420 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgnbr\" (UniqueName: \"kubernetes.io/projected/2206c275-5448-4cfb-bdfb-25180b3c01e1-kube-api-access-sgnbr\") pod \"openshift-config-operator-7777fb866f-qr5lk\" (UID: \"2206c275-5448-4cfb-bdfb-25180b3c01e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373450 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e06062-7725-4eb4-8a48-7fee4dd1340a-serving-cert\") pod 
\"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373469 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fpn4\" (UniqueName: \"kubernetes.io/projected/71913368-a56a-4e9c-b23b-e6b69f79c110-kube-api-access-5fpn4\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373485 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-config\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373501 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-images\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373504 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqfmh\" (UniqueName: \"kubernetes.io/projected/9e900363-6bcf-4546-88e8-61fb89228809-kube-api-access-wqfmh\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373574 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-config\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373631 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca613a2a-3f27-44b2-a750-ed66276e5560-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8bcqb\" (UID: \"ca613a2a-3f27-44b2-a750-ed66276e5560\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.374023 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/842b9d2e-016c-412f-803d-a87a69009268-serving-cert\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.374053 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.374079 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft47j\" (UniqueName: \"kubernetes.io/projected/446f17e4-455e-45ae-affc-f27215421058-kube-api-access-ft47j\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.374100 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-console-config\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.374118 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-audit-policies\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.374139 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2005f6fa-25f1-421c-9028-5cea529c61be-config\") pod \"kube-apiserver-operator-766d6c64bb-wkxgj\" (UID: \"2005f6fa-25f1-421c-9028-5cea529c61be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.377425 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2206c275-5448-4cfb-bdfb-25180b3c01e1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qr5lk\" (UID: \"2206c275-5448-4cfb-bdfb-25180b3c01e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.377929 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-config\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: 
I0214 10:44:09.378325 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/842b9d2e-016c-412f-803d-a87a69009268-trusted-ca\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.378454 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.378551 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e900363-6bcf-4546-88e8-61fb89228809-service-ca-bundle\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.378595 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.378617 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.380694 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.379645 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-console-config\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.380300 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-service-ca\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.380357 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/446f17e4-455e-45ae-affc-f27215421058-audit-dir\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.380722 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484eab2e-2a8d-45f4-ada7-92639c4e6bcb-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-thpxl\" (UID: \"484eab2e-2a8d-45f4-ada7-92639c4e6bcb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.381933 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: 
\"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.379374 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-config\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.384597 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e900363-6bcf-4546-88e8-61fb89228809-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.385278 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-config\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.385853 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca613a2a-3f27-44b2-a750-ed66276e5560-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8bcqb\" (UID: \"ca613a2a-3f27-44b2-a750-ed66276e5560\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.386271 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.387789 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/842b9d2e-016c-412f-803d-a87a69009268-config\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.372546 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-client-ca\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.386931 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71913368-a56a-4e9c-b23b-e6b69f79c110-audit-dir\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.387631 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca613a2a-3f27-44b2-a750-ed66276e5560-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8bcqb\" (UID: \"ca613a2a-3f27-44b2-a750-ed66276e5560\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.388366 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/71913368-a56a-4e9c-b23b-e6b69f79c110-encryption-config\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.388677 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-serving-cert\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.386828 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-etcd-serving-ca\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.389262 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2206c275-5448-4cfb-bdfb-25180b3c01e1-serving-cert\") pod \"openshift-config-operator-7777fb866f-qr5lk\" (UID: \"2206c275-5448-4cfb-bdfb-25180b3c01e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.373484 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e900363-6bcf-4546-88e8-61fb89228809-config\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.390325 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-image-import-ca\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.390534 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d3351d-662a-4e3e-b7fa-f7eb332a1506-serving-cert\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.390862 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.391270 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/71913368-a56a-4e9c-b23b-e6b69f79c110-audit\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.391679 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.392472 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-oauth-serving-cert\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.392854 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.393884 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e900363-6bcf-4546-88e8-61fb89228809-serving-cert\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.394072 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/484eab2e-2a8d-45f4-ada7-92639c4e6bcb-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-thpxl\" (UID: \"484eab2e-2a8d-45f4-ada7-92639c4e6bcb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.394549 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71913368-a56a-4e9c-b23b-e6b69f79c110-etcd-client\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " 
pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.395115 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-oauth-config\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.395359 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.396566 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71913368-a56a-4e9c-b23b-e6b69f79c110-serving-cert\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.396971 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.397417 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/842b9d2e-016c-412f-803d-a87a69009268-serving-cert\") pod \"console-operator-58897d9998-tckpd\" (UID: 
\"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.399162 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.399322 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.399677 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.401322 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc4bed44-3d6b-4055-bb84-75071c99aef8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dxngl\" (UID: \"bc4bed44-3d6b-4055-bb84-75071c99aef8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.410154 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5gl7g"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.414052 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-audit-policies\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.415768 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-brfbh"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.415809 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.415824 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.415842 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r4f7j"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.415907 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.417335 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.417450 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nhfqg"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.418789 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.419866 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.422700 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.424488 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z7cf7"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.426123 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l6bdf"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.429251 4736 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.429287 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rwdt7"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.430286 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.431489 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fss8"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.433025 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.433823 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.435087 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.436120 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2jk7h"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.437445 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.437718 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.438056 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.439039 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.441498 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.443610 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-68p48"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.445303 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nv7wg"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.446384 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-j4mwm"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.447239 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.452709 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.452810 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v52bz"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.452828 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.459674 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.459787 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.461676 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-spvg2"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.464023 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.465099 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tckpd"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.466132 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2jk7h"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.467151 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x"] Feb 14 10:44:09 crc 
kubenswrapper[4736]: I0214 10:44:09.468246 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.469399 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5gl7g"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.470736 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dddp5"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.471260 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dddp5" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.472018 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dddp5"] Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.474767 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/178ad6b5-adb5-40a2-9888-52b2a8b01d66-etcd-ca\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.474800 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20e06062-7725-4eb4-8a48-7fee4dd1340a-encryption-config\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.474836 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/178ad6b5-adb5-40a2-9888-52b2a8b01d66-etcd-client\") pod \"etcd-operator-b45778765-nv7wg\" (UID: 
\"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.474852 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e06062-7725-4eb4-8a48-7fee4dd1340a-audit-dir\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.474874 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3550c81-1f31-4800-b399-4168db6f20fc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8bxbt\" (UID: \"b3550c81-1f31-4800-b399-4168db6f20fc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.474909 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddd9068-95aa-4e08-bed7-b400152e1766-config\") pod \"kube-controller-manager-operator-78b949d7b-kx9jr\" (UID: \"4ddd9068-95aa-4e08-bed7-b400152e1766\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.474926 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dswgh\" (UniqueName: \"kubernetes.io/projected/178ad6b5-adb5-40a2-9888-52b2a8b01d66-kube-api-access-dswgh\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.474936 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e06062-7725-4eb4-8a48-7fee4dd1340a-audit-dir\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.474944 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20e06062-7725-4eb4-8a48-7fee4dd1340a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475022 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178ad6b5-adb5-40a2-9888-52b2a8b01d66-config\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475084 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/178ad6b5-adb5-40a2-9888-52b2a8b01d66-etcd-service-ca\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475111 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20e06062-7725-4eb4-8a48-7fee4dd1340a-etcd-client\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475137 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/178ad6b5-adb5-40a2-9888-52b2a8b01d66-serving-cert\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475163 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e06062-7725-4eb4-8a48-7fee4dd1340a-serving-cert\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475219 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2005f6fa-25f1-421c-9028-5cea529c61be-config\") pod \"kube-apiserver-operator-766d6c64bb-wkxgj\" (UID: \"2005f6fa-25f1-421c-9028-5cea529c61be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475256 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddd9068-95aa-4e08-bed7-b400152e1766-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kx9jr\" (UID: \"4ddd9068-95aa-4e08-bed7-b400152e1766\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475274 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2005f6fa-25f1-421c-9028-5cea529c61be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wkxgj\" (UID: \"2005f6fa-25f1-421c-9028-5cea529c61be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 
10:44:09.475293 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ddd9068-95aa-4e08-bed7-b400152e1766-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kx9jr\" (UID: \"4ddd9068-95aa-4e08-bed7-b400152e1766\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475317 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwmlj\" (UniqueName: \"kubernetes.io/projected/20e06062-7725-4eb4-8a48-7fee4dd1340a-kube-api-access-pwmlj\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475339 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e06062-7725-4eb4-8a48-7fee4dd1340a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475365 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r9cf\" (UniqueName: \"kubernetes.io/projected/b3550c81-1f31-4800-b399-4168db6f20fc-kube-api-access-7r9cf\") pod \"control-plane-machine-set-operator-78cbb6b69f-8bxbt\" (UID: \"b3550c81-1f31-4800-b399-4168db6f20fc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475387 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e06062-7725-4eb4-8a48-7fee4dd1340a-audit-policies\") pod \"apiserver-7bbb656c7d-666xs\" (UID: 
\"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475413 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2005f6fa-25f1-421c-9028-5cea529c61be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wkxgj\" (UID: \"2005f6fa-25f1-421c-9028-5cea529c61be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.475889 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e06062-7725-4eb4-8a48-7fee4dd1340a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.476140 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20e06062-7725-4eb4-8a48-7fee4dd1340a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.477616 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.478117 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e06062-7725-4eb4-8a48-7fee4dd1340a-audit-policies\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.478705 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddd9068-95aa-4e08-bed7-b400152e1766-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kx9jr\" (UID: \"4ddd9068-95aa-4e08-bed7-b400152e1766\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.478931 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20e06062-7725-4eb4-8a48-7fee4dd1340a-etcd-client\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.480072 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2005f6fa-25f1-421c-9028-5cea529c61be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wkxgj\" (UID: \"2005f6fa-25f1-421c-9028-5cea529c61be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.480785 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20e06062-7725-4eb4-8a48-7fee4dd1340a-encryption-config\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.483273 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e06062-7725-4eb4-8a48-7fee4dd1340a-serving-cert\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 
10:44:09.487925 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/178ad6b5-adb5-40a2-9888-52b2a8b01d66-etcd-client\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.497695 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.506335 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/178ad6b5-adb5-40a2-9888-52b2a8b01d66-etcd-service-ca\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.517826 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.537913 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.558484 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.565877 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178ad6b5-adb5-40a2-9888-52b2a8b01d66-config\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.580398 4736 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.598313 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.609927 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178ad6b5-adb5-40a2-9888-52b2a8b01d66-serving-cert\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.617853 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.625568 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/178ad6b5-adb5-40a2-9888-52b2a8b01d66-etcd-ca\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.638020 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.658238 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.678645 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.687055 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2005f6fa-25f1-421c-9028-5cea529c61be-config\") pod \"kube-apiserver-operator-766d6c64bb-wkxgj\" (UID: \"2005f6fa-25f1-421c-9028-5cea529c61be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.698635 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.706696 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddd9068-95aa-4e08-bed7-b400152e1766-config\") pod \"kube-controller-manager-operator-78b949d7b-kx9jr\" (UID: \"4ddd9068-95aa-4e08-bed7-b400152e1766\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.719427 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.730419 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3550c81-1f31-4800-b399-4168db6f20fc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8bxbt\" (UID: \"b3550c81-1f31-4800-b399-4168db6f20fc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.738785 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.779106 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 14 
10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.806575 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.818706 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.839538 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.859361 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.879714 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.898930 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.918513 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.937914 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.959539 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 14 10:44:09 crc kubenswrapper[4736]: I0214 10:44:09.979253 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 14 10:44:09 crc 
kubenswrapper[4736]: I0214 10:44:09.999633 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.019790 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.038088 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.059677 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.078113 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.097879 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.118329 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.138611 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.158489 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.179243 4736 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.199035 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.217663 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.238992 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.259053 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.278806 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.298689 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.319237 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.338102 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.356868 4736 request.go:700] Waited for 1.005522321s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0 Feb 14 10:44:10 crc 
kubenswrapper[4736]: I0214 10:44:10.358802 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.378493 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.398391 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.428262 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.438344 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.458557 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.478466 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.498223 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.518515 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.538835 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.559161 4736 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.578153 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.598605 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.618537 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.639020 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.658047 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.677644 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.697669 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.718337 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.738433 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.757565 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.778301 4736 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.797926 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.817936 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.838422 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.858851 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.878856 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.898835 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.917956 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.959446 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvmbl\" (UniqueName: \"kubernetes.io/projected/b305d178-1f44-4e74-9a0f-9a6c95fb4c45-kube-api-access-wvmbl\") pod \"machine-api-operator-5694c8668f-68p48\" (UID: \"b305d178-1f44-4e74-9a0f-9a6c95fb4c45\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.974789 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htzst\" (UniqueName: 
\"kubernetes.io/projected/842b9d2e-016c-412f-803d-a87a69009268-kube-api-access-htzst\") pod \"console-operator-58897d9998-tckpd\" (UID: \"842b9d2e-016c-412f-803d-a87a69009268\") " pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:10 crc kubenswrapper[4736]: I0214 10:44:10.994433 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqfmh\" (UniqueName: \"kubernetes.io/projected/9e900363-6bcf-4546-88e8-61fb89228809-kube-api-access-wqfmh\") pod \"authentication-operator-69f744f599-7n6r5\" (UID: \"9e900363-6bcf-4546-88e8-61fb89228809\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.016813 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x54mw\" (UniqueName: \"kubernetes.io/projected/ca613a2a-3f27-44b2-a750-ed66276e5560-kube-api-access-x54mw\") pod \"openshift-apiserver-operator-796bbdcf4f-8bcqb\" (UID: \"ca613a2a-3f27-44b2-a750-ed66276e5560\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.036401 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft47j\" (UniqueName: \"kubernetes.io/projected/446f17e4-455e-45ae-affc-f27215421058-kube-api-access-ft47j\") pod \"oauth-openshift-558db77b4-7hmxn\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.053563 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgnbr\" (UniqueName: \"kubernetes.io/projected/2206c275-5448-4cfb-bdfb-25180b3c01e1-kube-api-access-sgnbr\") pod \"openshift-config-operator-7777fb866f-qr5lk\" (UID: \"2206c275-5448-4cfb-bdfb-25180b3c01e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 
10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.072354 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xg66\" (UniqueName: \"kubernetes.io/projected/19ffdb45-8f94-48d2-93f8-b139825d4063-kube-api-access-2xg66\") pod \"console-f9d7485db-r4f7j\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.093929 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.097322 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrq8\" (UniqueName: \"kubernetes.io/projected/bc4bed44-3d6b-4055-bb84-75071c99aef8-kube-api-access-xzrq8\") pod \"cluster-samples-operator-665b6dd947-dxngl\" (UID: \"bc4bed44-3d6b-4055-bb84-75071c99aef8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.113482 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fpn4\" (UniqueName: \"kubernetes.io/projected/71913368-a56a-4e9c-b23b-e6b69f79c110-kube-api-access-5fpn4\") pod \"apiserver-76f77b778f-z7cf7\" (UID: \"71913368-a56a-4e9c-b23b-e6b69f79c110\") " pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.120642 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.134006 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm7gz\" (UniqueName: \"kubernetes.io/projected/484eab2e-2a8d-45f4-ada7-92639c4e6bcb-kube-api-access-mm7gz\") pod \"openshift-controller-manager-operator-756b6f6bc6-thpxl\" (UID: \"484eab2e-2a8d-45f4-ada7-92639c4e6bcb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.142576 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.180983 4736 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.183916 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdnm2\" (UniqueName: \"kubernetes.io/projected/21d3351d-662a-4e3e-b7fa-f7eb332a1506-kube-api-access-rdnm2\") pod \"route-controller-manager-6576b87f9c-rd5vr\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.198809 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.218736 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.238668 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.243758 4736 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.258541 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.263841 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.281351 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.281703 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.294104 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.300463 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.317564 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.336058 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.339057 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.345543 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.357660 4736 request.go:700] Waited for 1.886213803s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.371160 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.371332 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.377651 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.380424 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.412166 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.412928 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb"] Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.423262 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.429216 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z7cf7"] Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.463854 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dswgh\" (UniqueName: \"kubernetes.io/projected/178ad6b5-adb5-40a2-9888-52b2a8b01d66-kube-api-access-dswgh\") pod \"etcd-operator-b45778765-nv7wg\" (UID: \"178ad6b5-adb5-40a2-9888-52b2a8b01d66\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.475329 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2005f6fa-25f1-421c-9028-5cea529c61be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wkxgj\" (UID: \"2005f6fa-25f1-421c-9028-5cea529c61be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:11 crc kubenswrapper[4736]: W0214 10:44:11.494319 4736 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca613a2a_3f27_44b2_a750_ed66276e5560.slice/crio-1ab5d74e685664dbf9f83d293bdaee6103441cc59b21b6013b8e6f4a8b5d2c11 WatchSource:0}: Error finding container 1ab5d74e685664dbf9f83d293bdaee6103441cc59b21b6013b8e6f4a8b5d2c11: Status 404 returned error can't find the container with id 1ab5d74e685664dbf9f83d293bdaee6103441cc59b21b6013b8e6f4a8b5d2c11 Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.509061 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwmlj\" (UniqueName: \"kubernetes.io/projected/20e06062-7725-4eb4-8a48-7fee4dd1340a-kube-api-access-pwmlj\") pod \"apiserver-7bbb656c7d-666xs\" (UID: \"20e06062-7725-4eb4-8a48-7fee4dd1340a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.510451 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r4f7j"] Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.516500 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ddd9068-95aa-4e08-bed7-b400152e1766-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kx9jr\" (UID: \"4ddd9068-95aa-4e08-bed7-b400152e1766\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.537230 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.548037 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.548805 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r9cf\" (UniqueName: \"kubernetes.io/projected/b3550c81-1f31-4800-b399-4168db6f20fc-kube-api-access-7r9cf\") pod \"control-plane-machine-set-operator-78cbb6b69f-8bxbt\" (UID: \"b3550c81-1f31-4800-b399-4168db6f20fc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.556415 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tckpd"] Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.557920 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.562719 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-68p48"] Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.564365 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613157 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c385175-c749-4eeb-9d28-eedb51937337-auth-proxy-config\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613190 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b9a5589-a45e-4203-aea7-266e2dfa5088-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613211 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-trusted-ca\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613278 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-certificates\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613316 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pd4t\" (UniqueName: \"kubernetes.io/projected/3b2caf1c-b536-4518-a8fa-966eb348bad7-kube-api-access-2pd4t\") pod \"dns-operator-744455d44c-rwdt7\" (UID: \"3b2caf1c-b536-4518-a8fa-966eb348bad7\") " pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613344 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613374 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-config\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613401 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b9a5589-a45e-4203-aea7-266e2dfa5088-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613431 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-bound-sa-token\") pod \"image-registry-697d97f7c8-9fss8\" (UID: 
\"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613447 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpwbm\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-kube-api-access-bpwbm\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613460 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-tls\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613503 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s75ln\" (UniqueName: \"kubernetes.io/projected/4c385175-c749-4eeb-9d28-eedb51937337-kube-api-access-s75ln\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613553 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wnck\" (UniqueName: \"kubernetes.io/projected/0dac6876-5757-41e4-88ac-a640e67b013e-kube-api-access-7wnck\") pod \"downloads-7954f5f757-l6bdf\" (UID: \"0dac6876-5757-41e4-88ac-a640e67b013e\") " pod="openshift-console/downloads-7954f5f757-l6bdf" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613586 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c385175-c749-4eeb-9d28-eedb51937337-config\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613601 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4985141b-c570-4dd3-aad8-adbf891e00e0-serving-cert\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613617 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb947\" (UniqueName: \"kubernetes.io/projected/4985141b-c570-4dd3-aad8-adbf891e00e0-kube-api-access-tb947\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613634 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4c385175-c749-4eeb-9d28-eedb51937337-machine-approver-tls\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613657 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-client-ca\") pod \"controller-manager-879f6c89f-brfbh\" (UID: 
\"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613670 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.613695 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b2caf1c-b536-4518-a8fa-966eb348bad7-metrics-tls\") pod \"dns-operator-744455d44c-rwdt7\" (UID: \"3b2caf1c-b536-4518-a8fa-966eb348bad7\") " pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" Feb 14 10:44:11 crc kubenswrapper[4736]: E0214 10:44:11.616487 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:12.116474682 +0000 UTC m=+162.485102050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:11 crc kubenswrapper[4736]: W0214 10:44:11.672468 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod842b9d2e_016c_412f_803d_a87a69009268.slice/crio-5633813dcf87e1182c559ac09ce33453bb341605e649a2209c1d7aa3f9f5b5fd WatchSource:0}: Error finding container 5633813dcf87e1182c559ac09ce33453bb341605e649a2209c1d7aa3f9f5b5fd: Status 404 returned error can't find the container with id 5633813dcf87e1182c559ac09ce33453bb341605e649a2209c1d7aa3f9f5b5fd Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.714836 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715010 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4c385175-c749-4eeb-9d28-eedb51937337-machine-approver-tls\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: E0214 10:44:11.715069 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:12.215041249 +0000 UTC m=+162.583668617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715106 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c696bd02-3fce-484b-a793-efb4e593bff6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmq7j\" (UID: \"c696bd02-3fce-484b-a793-efb4e593bff6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715141 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2bdcca22-6b12-440b-9247-b216d1d45071-node-bootstrap-token\") pod \"machine-config-server-j4mwm\" (UID: \"2bdcca22-6b12-440b-9247-b216d1d45071\") " pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715166 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-brfbh\" (UID: 
\"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715183 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b2caf1c-b536-4518-a8fa-966eb348bad7-metrics-tls\") pod \"dns-operator-744455d44c-rwdt7\" (UID: \"3b2caf1c-b536-4518-a8fa-966eb348bad7\") " pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715199 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v52bz\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715216 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glxct\" (UniqueName: \"kubernetes.io/projected/1a942552-de44-4c27-8779-4cf239de59a3-kube-api-access-glxct\") pod \"collect-profiles-29517750-rkwd6\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715230 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tclh2\" (UniqueName: \"kubernetes.io/projected/43f22db7-b110-4a1a-823b-888bb2768191-kube-api-access-tclh2\") pod \"dns-default-2jk7h\" (UID: \"43f22db7-b110-4a1a-823b-888bb2768191\") " pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715246 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6gvsq\" (UniqueName: \"kubernetes.io/projected/51168ccc-7cf4-4efe-a67b-049d4072b5c0-kube-api-access-6gvsq\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715262 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlfwm\" (UniqueName: \"kubernetes.io/projected/0d5941c4-bec5-44db-aa12-d89e5ef34609-kube-api-access-xlfwm\") pod \"ingress-canary-dddp5\" (UID: \"0d5941c4-bec5-44db-aa12-d89e5ef34609\") " pod="openshift-ingress-canary/ingress-canary-dddp5" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715279 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d5941c4-bec5-44db-aa12-d89e5ef34609-cert\") pod \"ingress-canary-dddp5\" (UID: \"0d5941c4-bec5-44db-aa12-d89e5ef34609\") " pod="openshift-ingress-canary/ingress-canary-dddp5" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715295 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9xn7\" (UniqueName: \"kubernetes.io/projected/c3947381-3e2c-4e4e-bb22-5e7d0494222c-kube-api-access-q9xn7\") pod \"package-server-manager-789f6589d5-f6vrk\" (UID: \"c3947381-3e2c-4e4e-bb22-5e7d0494222c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715313 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-trusted-ca\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 
10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715327 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad6d5e20-f083-4fc5-8856-234465465c02-service-ca-bundle\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715356 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/51168ccc-7cf4-4efe-a67b-049d4072b5c0-tmpfs\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715372 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlt67\" (UniqueName: \"kubernetes.io/projected/763fa8c1-f41f-4dea-b69d-98133a1357d2-kube-api-access-qlt67\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715402 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c385175-c749-4eeb-9d28-eedb51937337-auth-proxy-config\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715422 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-mountpoint-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715439 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c77f606-891d-4408-adc0-f27624c5de0c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nsdsx\" (UID: \"3c77f606-891d-4408-adc0-f27624c5de0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715458 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2lz5\" (UniqueName: \"kubernetes.io/projected/3c77f606-891d-4408-adc0-f27624c5de0c-kube-api-access-j2lz5\") pod \"machine-config-controller-84d6567774-nsdsx\" (UID: \"3c77f606-891d-4408-adc0-f27624c5de0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715473 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c77f606-891d-4408-adc0-f27624c5de0c-proxy-tls\") pod \"machine-config-controller-84d6567774-nsdsx\" (UID: \"3c77f606-891d-4408-adc0-f27624c5de0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715511 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pd4t\" (UniqueName: \"kubernetes.io/projected/3b2caf1c-b536-4518-a8fa-966eb348bad7-kube-api-access-2pd4t\") pod \"dns-operator-744455d44c-rwdt7\" (UID: \"3b2caf1c-b536-4518-a8fa-966eb348bad7\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715533 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715550 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f22db7-b110-4a1a-823b-888bb2768191-config-volume\") pod \"dns-default-2jk7h\" (UID: \"43f22db7-b110-4a1a-823b-888bb2768191\") " pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715577 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.715601 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-bound-sa-token\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: E0214 10:44:11.715913 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:12.215894062 +0000 UTC m=+162.584521500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.716923 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg7xb\" (UniqueName: \"kubernetes.io/projected/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-kube-api-access-jg7xb\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.716951 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-tls\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.716984 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34882bf0-6f91-4319-be98-ff12b0bcf393-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hq6m9\" (UID: \"34882bf0-6f91-4319-be98-ff12b0bcf393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717270 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/92ef3f92-697f-4e3a-a2c7-e5dc4d10983e-profile-collector-cert\") pod \"catalog-operator-68c6474976-2xp2x\" (UID: \"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717336 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb02e73f-f113-47f6-99cd-674686f3ad56-config\") pod \"service-ca-operator-777779d784-b8g8r\" (UID: \"fb02e73f-f113-47f6-99cd-674686f3ad56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717352 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-metrics-tls\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717475 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a942552-de44-4c27-8779-4cf239de59a3-secret-volume\") pod \"collect-profiles-29517750-rkwd6\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717501 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzc8\" (UniqueName: \"kubernetes.io/projected/fb02e73f-f113-47f6-99cd-674686f3ad56-kube-api-access-fqzc8\") pod 
\"service-ca-operator-777779d784-b8g8r\" (UID: \"fb02e73f-f113-47f6-99cd-674686f3ad56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717530 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8662q\" (UniqueName: \"kubernetes.io/projected/2bdcca22-6b12-440b-9247-b216d1d45071-kube-api-access-8662q\") pod \"machine-config-server-j4mwm\" (UID: \"2bdcca22-6b12-440b-9247-b216d1d45071\") " pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717553 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/beb97d53-c3be-4c23-a56a-ca182e70ad0b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nhfqg\" (UID: \"beb97d53-c3be-4c23-a56a-ca182e70ad0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717574 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wnck\" (UniqueName: \"kubernetes.io/projected/0dac6876-5757-41e4-88ac-a640e67b013e-kube-api-access-7wnck\") pod \"downloads-7954f5f757-l6bdf\" (UID: \"0dac6876-5757-41e4-88ac-a640e67b013e\") " pod="openshift-console/downloads-7954f5f757-l6bdf" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717590 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a942552-de44-4c27-8779-4cf239de59a3-config-volume\") pod \"collect-profiles-29517750-rkwd6\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717603 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb85p\" (UniqueName: \"kubernetes.io/projected/92ef3f92-697f-4e3a-a2c7-e5dc4d10983e-kube-api-access-mb85p\") pod \"catalog-operator-68c6474976-2xp2x\" (UID: \"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717626 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/763fa8c1-f41f-4dea-b69d-98133a1357d2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717639 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c696bd02-3fce-484b-a793-efb4e593bff6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmq7j\" (UID: \"c696bd02-3fce-484b-a793-efb4e593bff6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717664 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c385175-c749-4eeb-9d28-eedb51937337-config\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717679 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43f22db7-b110-4a1a-823b-888bb2768191-metrics-tls\") pod \"dns-default-2jk7h\" (UID: 
\"43f22db7-b110-4a1a-823b-888bb2768191\") " pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717695 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/92ef3f92-697f-4e3a-a2c7-e5dc4d10983e-srv-cert\") pod \"catalog-operator-68c6474976-2xp2x\" (UID: \"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717714 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-client-ca\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717728 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-plugins-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717763 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2bdcca22-6b12-440b-9247-b216d1d45071-certs\") pod \"machine-config-server-j4mwm\" (UID: \"2bdcca22-6b12-440b-9247-b216d1d45071\") " pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717779 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-csi-data-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717805 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3947381-3e2c-4e4e-bb22-5e7d0494222c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f6vrk\" (UID: \"c3947381-3e2c-4e4e-bb22-5e7d0494222c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717822 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c696bd02-3fce-484b-a793-efb4e593bff6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmq7j\" (UID: \"c696bd02-3fce-484b-a793-efb4e593bff6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717837 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2s9j\" (UniqueName: \"kubernetes.io/projected/beb97d53-c3be-4c23-a56a-ca182e70ad0b-kube-api-access-q2s9j\") pod \"multus-admission-controller-857f4d67dd-nhfqg\" (UID: \"beb97d53-c3be-4c23-a56a-ca182e70ad0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717861 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: 
\"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717887 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/26701b87-9d9f-4444-8f65-6e645daa1714-signing-cabundle\") pod \"service-ca-9c57cc56f-spvg2\" (UID: \"26701b87-9d9f-4444-8f65-6e645daa1714\") " pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717901 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad6d5e20-f083-4fc5-8856-234465465c02-metrics-certs\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717915 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717930 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b9a5589-a45e-4203-aea7-266e2dfa5088-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717947 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-trusted-ca\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717965 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrz5m\" (UniqueName: \"kubernetes.io/projected/34882bf0-6f91-4319-be98-ff12b0bcf393-kube-api-access-qrz5m\") pod \"kube-storage-version-migrator-operator-b67b599dd-hq6m9\" (UID: \"34882bf0-6f91-4319-be98-ff12b0bcf393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717991 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-certificates\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.718015 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rlgw\" (UniqueName: \"kubernetes.io/projected/d8991afa-da38-4dd2-9f58-cf895ec92784-kube-api-access-5rlgw\") pod \"marketplace-operator-79b997595-v52bz\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.718029 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" 
(UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.718063 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-config\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.718078 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/547644c0-393d-4c6d-b2dc-94c587cd9bfd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7z8zv\" (UID: \"547644c0-393d-4c6d-b2dc-94c587cd9bfd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.718103 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b9a5589-a45e-4203-aea7-266e2dfa5088-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.718149 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34882bf0-6f91-4319-be98-ff12b0bcf393-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hq6m9\" (UID: \"34882bf0-6f91-4319-be98-ff12b0bcf393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.718505 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b9a5589-a45e-4203-aea7-266e2dfa5088-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.718640 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-client-ca\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.719293 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c385175-c749-4eeb-9d28-eedb51937337-auth-proxy-config\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.720123 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-certificates\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.720218 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwbm\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-kube-api-access-bpwbm\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: 
I0214 10:44:11.720246 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/547644c0-393d-4c6d-b2dc-94c587cd9bfd-srv-cert\") pod \"olm-operator-6b444d44fb-7z8zv\" (UID: \"547644c0-393d-4c6d-b2dc-94c587cd9bfd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.720366 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51168ccc-7cf4-4efe-a67b-049d4072b5c0-webhook-cert\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.720666 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24n5g\" (UniqueName: \"kubernetes.io/projected/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-kube-api-access-24n5g\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.720927 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb02e73f-f113-47f6-99cd-674686f3ad56-serving-cert\") pod \"service-ca-operator-777779d784-b8g8r\" (UID: \"fb02e73f-f113-47f6-99cd-674686f3ad56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.720958 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2f7\" (UniqueName: 
\"kubernetes.io/projected/9df4c287-aa48-47d1-86b3-156b92993310-kube-api-access-pl2f7\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.721057 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-trusted-ca\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.720981 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-socket-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.721215 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s75ln\" (UniqueName: \"kubernetes.io/projected/4c385175-c749-4eeb-9d28-eedb51937337-kube-api-access-s75ln\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.721268 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnskr\" (UniqueName: \"kubernetes.io/projected/26701b87-9d9f-4444-8f65-6e645daa1714-kube-api-access-wnskr\") pod \"service-ca-9c57cc56f-spvg2\" (UID: \"26701b87-9d9f-4444-8f65-6e645daa1714\") " pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.721288 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad6d5e20-f083-4fc5-8856-234465465c02-stats-auth\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.721305 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/763fa8c1-f41f-4dea-b69d-98133a1357d2-images\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.721359 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v52bz\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.721360 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c385175-c749-4eeb-9d28-eedb51937337-config\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.721375 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad6d5e20-f083-4fc5-8856-234465465c02-default-certificate\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " 
pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.721442 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft2gd\" (UniqueName: \"kubernetes.io/projected/13b130be-039d-41d0-8c39-0137921f99ab-kube-api-access-ft2gd\") pod \"migrator-59844c95c7-lclrw\" (UID: \"13b130be-039d-41d0-8c39-0137921f99ab\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.722356 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/26701b87-9d9f-4444-8f65-6e645daa1714-signing-key\") pod \"service-ca-9c57cc56f-spvg2\" (UID: \"26701b87-9d9f-4444-8f65-6e645daa1714\") " pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.722426 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvk2h\" (UniqueName: \"kubernetes.io/projected/547644c0-393d-4c6d-b2dc-94c587cd9bfd-kube-api-access-pvk2h\") pod \"olm-operator-6b444d44fb-7z8zv\" (UID: \"547644c0-393d-4c6d-b2dc-94c587cd9bfd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.722444 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctqg\" (UniqueName: \"kubernetes.io/projected/ad6d5e20-f083-4fc5-8856-234465465c02-kube-api-access-5ctqg\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.722461 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/51168ccc-7cf4-4efe-a67b-049d4072b5c0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.722494 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4985141b-c570-4dd3-aad8-adbf891e00e0-serving-cert\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.717004 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.722611 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-registration-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.722691 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb947\" (UniqueName: \"kubernetes.io/projected/4985141b-c570-4dd3-aad8-adbf891e00e0-kube-api-access-tb947\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.722846 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/763fa8c1-f41f-4dea-b69d-98133a1357d2-proxy-tls\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.724155 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-config\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.728817 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4985141b-c570-4dd3-aad8-adbf891e00e0-serving-cert\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.730421 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b9a5589-a45e-4203-aea7-266e2dfa5088-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.730874 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b2caf1c-b536-4518-a8fa-966eb348bad7-metrics-tls\") pod \"dns-operator-744455d44c-rwdt7\" (UID: \"3b2caf1c-b536-4518-a8fa-966eb348bad7\") " pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" Feb 14 10:44:11 
crc kubenswrapper[4736]: I0214 10:44:11.731198 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4c385175-c749-4eeb-9d28-eedb51937337-machine-approver-tls\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.731987 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-tls\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.745005 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.752616 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-bound-sa-token\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.776432 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pd4t\" (UniqueName: \"kubernetes.io/projected/3b2caf1c-b536-4518-a8fa-966eb348bad7-kube-api-access-2pd4t\") pod \"dns-operator-744455d44c-rwdt7\" (UID: \"3b2caf1c-b536-4518-a8fa-966eb348bad7\") " pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.799683 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wnck\" (UniqueName: 
\"kubernetes.io/projected/0dac6876-5757-41e4-88ac-a640e67b013e-kube-api-access-7wnck\") pod \"downloads-7954f5f757-l6bdf\" (UID: \"0dac6876-5757-41e4-88ac-a640e67b013e\") " pod="openshift-console/downloads-7954f5f757-l6bdf" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.818440 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpwbm\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-kube-api-access-bpwbm\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.828540 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829156 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/763fa8c1-f41f-4dea-b69d-98133a1357d2-proxy-tls\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829212 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c696bd02-3fce-484b-a793-efb4e593bff6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmq7j\" (UID: \"c696bd02-3fce-484b-a793-efb4e593bff6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829235 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v52bz\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829251 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glxct\" (UniqueName: \"kubernetes.io/projected/1a942552-de44-4c27-8779-4cf239de59a3-kube-api-access-glxct\") pod \"collect-profiles-29517750-rkwd6\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829282 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2bdcca22-6b12-440b-9247-b216d1d45071-node-bootstrap-token\") pod \"machine-config-server-j4mwm\" (UID: \"2bdcca22-6b12-440b-9247-b216d1d45071\") " pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829298 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tclh2\" (UniqueName: \"kubernetes.io/projected/43f22db7-b110-4a1a-823b-888bb2768191-kube-api-access-tclh2\") pod \"dns-default-2jk7h\" (UID: \"43f22db7-b110-4a1a-823b-888bb2768191\") " pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829317 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlfwm\" (UniqueName: \"kubernetes.io/projected/0d5941c4-bec5-44db-aa12-d89e5ef34609-kube-api-access-xlfwm\") pod \"ingress-canary-dddp5\" (UID: \"0d5941c4-bec5-44db-aa12-d89e5ef34609\") " 
pod="openshift-ingress-canary/ingress-canary-dddp5" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829333 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gvsq\" (UniqueName: \"kubernetes.io/projected/51168ccc-7cf4-4efe-a67b-049d4072b5c0-kube-api-access-6gvsq\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829371 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9xn7\" (UniqueName: \"kubernetes.io/projected/c3947381-3e2c-4e4e-bb22-5e7d0494222c-kube-api-access-q9xn7\") pod \"package-server-manager-789f6589d5-f6vrk\" (UID: \"c3947381-3e2c-4e4e-bb22-5e7d0494222c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829389 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d5941c4-bec5-44db-aa12-d89e5ef34609-cert\") pod \"ingress-canary-dddp5\" (UID: \"0d5941c4-bec5-44db-aa12-d89e5ef34609\") " pod="openshift-ingress-canary/ingress-canary-dddp5" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829404 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-trusted-ca\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829436 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad6d5e20-f083-4fc5-8856-234465465c02-service-ca-bundle\") pod 
\"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829451 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/51168ccc-7cf4-4efe-a67b-049d4072b5c0-tmpfs\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829468 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlt67\" (UniqueName: \"kubernetes.io/projected/763fa8c1-f41f-4dea-b69d-98133a1357d2-kube-api-access-qlt67\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829485 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-mountpoint-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829516 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c77f606-891d-4408-adc0-f27624c5de0c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nsdsx\" (UID: \"3c77f606-891d-4408-adc0-f27624c5de0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j2lz5\" (UniqueName: \"kubernetes.io/projected/3c77f606-891d-4408-adc0-f27624c5de0c-kube-api-access-j2lz5\") pod \"machine-config-controller-84d6567774-nsdsx\" (UID: \"3c77f606-891d-4408-adc0-f27624c5de0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829555 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c77f606-891d-4408-adc0-f27624c5de0c-proxy-tls\") pod \"machine-config-controller-84d6567774-nsdsx\" (UID: \"3c77f606-891d-4408-adc0-f27624c5de0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829609 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f22db7-b110-4a1a-823b-888bb2768191-config-volume\") pod \"dns-default-2jk7h\" (UID: \"43f22db7-b110-4a1a-823b-888bb2768191\") " pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829626 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829641 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg7xb\" (UniqueName: \"kubernetes.io/projected/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-kube-api-access-jg7xb\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:11 crc kubenswrapper[4736]: 
I0214 10:44:11.829676 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34882bf0-6f91-4319-be98-ff12b0bcf393-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hq6m9\" (UID: \"34882bf0-6f91-4319-be98-ff12b0bcf393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829696 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/92ef3f92-697f-4e3a-a2c7-e5dc4d10983e-profile-collector-cert\") pod \"catalog-operator-68c6474976-2xp2x\" (UID: \"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829712 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb02e73f-f113-47f6-99cd-674686f3ad56-config\") pod \"service-ca-operator-777779d784-b8g8r\" (UID: \"fb02e73f-f113-47f6-99cd-674686f3ad56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829727 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-metrics-tls\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829773 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a942552-de44-4c27-8779-4cf239de59a3-secret-volume\") pod \"collect-profiles-29517750-rkwd6\" (UID: 
\"1a942552-de44-4c27-8779-4cf239de59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829790 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzc8\" (UniqueName: \"kubernetes.io/projected/fb02e73f-f113-47f6-99cd-674686f3ad56-kube-api-access-fqzc8\") pod \"service-ca-operator-777779d784-b8g8r\" (UID: \"fb02e73f-f113-47f6-99cd-674686f3ad56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829806 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8662q\" (UniqueName: \"kubernetes.io/projected/2bdcca22-6b12-440b-9247-b216d1d45071-kube-api-access-8662q\") pod \"machine-config-server-j4mwm\" (UID: \"2bdcca22-6b12-440b-9247-b216d1d45071\") " pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829842 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/beb97d53-c3be-4c23-a56a-ca182e70ad0b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nhfqg\" (UID: \"beb97d53-c3be-4c23-a56a-ca182e70ad0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829867 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a942552-de44-4c27-8779-4cf239de59a3-config-volume\") pod \"collect-profiles-29517750-rkwd6\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829886 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb85p\" 
(UniqueName: \"kubernetes.io/projected/92ef3f92-697f-4e3a-a2c7-e5dc4d10983e-kube-api-access-mb85p\") pod \"catalog-operator-68c6474976-2xp2x\" (UID: \"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829955 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/763fa8c1-f41f-4dea-b69d-98133a1357d2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829971 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c696bd02-3fce-484b-a793-efb4e593bff6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmq7j\" (UID: \"c696bd02-3fce-484b-a793-efb4e593bff6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.829989 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43f22db7-b110-4a1a-823b-888bb2768191-metrics-tls\") pod \"dns-default-2jk7h\" (UID: \"43f22db7-b110-4a1a-823b-888bb2768191\") " pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.832865 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/92ef3f92-697f-4e3a-a2c7-e5dc4d10983e-srv-cert\") pod \"catalog-operator-68c6474976-2xp2x\" (UID: \"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.832903 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-plugins-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.832926 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2bdcca22-6b12-440b-9247-b216d1d45071-certs\") pod \"machine-config-server-j4mwm\" (UID: \"2bdcca22-6b12-440b-9247-b216d1d45071\") " pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.832952 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-csi-data-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.832974 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3947381-3e2c-4e4e-bb22-5e7d0494222c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f6vrk\" (UID: \"c3947381-3e2c-4e4e-bb22-5e7d0494222c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.832993 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c696bd02-3fce-484b-a793-efb4e593bff6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmq7j\" (UID: \"c696bd02-3fce-484b-a793-efb4e593bff6\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833017 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2s9j\" (UniqueName: \"kubernetes.io/projected/beb97d53-c3be-4c23-a56a-ca182e70ad0b-kube-api-access-q2s9j\") pod \"multus-admission-controller-857f4d67dd-nhfqg\" (UID: \"beb97d53-c3be-4c23-a56a-ca182e70ad0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833040 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833064 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/26701b87-9d9f-4444-8f65-6e645daa1714-signing-cabundle\") pod \"service-ca-9c57cc56f-spvg2\" (UID: \"26701b87-9d9f-4444-8f65-6e645daa1714\") " pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833079 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad6d5e20-f083-4fc5-8856-234465465c02-metrics-certs\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833144 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833183 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrz5m\" (UniqueName: \"kubernetes.io/projected/34882bf0-6f91-4319-be98-ff12b0bcf393-kube-api-access-qrz5m\") pod \"kube-storage-version-migrator-operator-b67b599dd-hq6m9\" (UID: \"34882bf0-6f91-4319-be98-ff12b0bcf393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833220 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833246 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rlgw\" (UniqueName: \"kubernetes.io/projected/d8991afa-da38-4dd2-9f58-cf895ec92784-kube-api-access-5rlgw\") pod \"marketplace-operator-79b997595-v52bz\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833280 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/547644c0-393d-4c6d-b2dc-94c587cd9bfd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7z8zv\" (UID: 
\"547644c0-393d-4c6d-b2dc-94c587cd9bfd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833306 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34882bf0-6f91-4319-be98-ff12b0bcf393-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hq6m9\" (UID: \"34882bf0-6f91-4319-be98-ff12b0bcf393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833352 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/547644c0-393d-4c6d-b2dc-94c587cd9bfd-srv-cert\") pod \"olm-operator-6b444d44fb-7z8zv\" (UID: \"547644c0-393d-4c6d-b2dc-94c587cd9bfd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833375 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51168ccc-7cf4-4efe-a67b-049d4072b5c0-webhook-cert\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833402 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24n5g\" (UniqueName: \"kubernetes.io/projected/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-kube-api-access-24n5g\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833423 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pl2f7\" (UniqueName: \"kubernetes.io/projected/9df4c287-aa48-47d1-86b3-156b92993310-kube-api-access-pl2f7\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833447 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb02e73f-f113-47f6-99cd-674686f3ad56-serving-cert\") pod \"service-ca-operator-777779d784-b8g8r\" (UID: \"fb02e73f-f113-47f6-99cd-674686f3ad56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833462 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-socket-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833477 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad6d5e20-f083-4fc5-8856-234465465c02-stats-auth\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833493 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/763fa8c1-f41f-4dea-b69d-98133a1357d2-images\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833512 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wnskr\" (UniqueName: \"kubernetes.io/projected/26701b87-9d9f-4444-8f65-6e645daa1714-kube-api-access-wnskr\") pod \"service-ca-9c57cc56f-spvg2\" (UID: \"26701b87-9d9f-4444-8f65-6e645daa1714\") " pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833530 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v52bz\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833547 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad6d5e20-f083-4fc5-8856-234465465c02-default-certificate\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833573 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft2gd\" (UniqueName: \"kubernetes.io/projected/13b130be-039d-41d0-8c39-0137921f99ab-kube-api-access-ft2gd\") pod \"migrator-59844c95c7-lclrw\" (UID: \"13b130be-039d-41d0-8c39-0137921f99ab\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833594 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/26701b87-9d9f-4444-8f65-6e645daa1714-signing-key\") pod \"service-ca-9c57cc56f-spvg2\" (UID: \"26701b87-9d9f-4444-8f65-6e645daa1714\") " pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:11 crc 
kubenswrapper[4736]: I0214 10:44:11.833610 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvk2h\" (UniqueName: \"kubernetes.io/projected/547644c0-393d-4c6d-b2dc-94c587cd9bfd-kube-api-access-pvk2h\") pod \"olm-operator-6b444d44fb-7z8zv\" (UID: \"547644c0-393d-4c6d-b2dc-94c587cd9bfd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833626 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ctqg\" (UniqueName: \"kubernetes.io/projected/ad6d5e20-f083-4fc5-8856-234465465c02-kube-api-access-5ctqg\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833641 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51168ccc-7cf4-4efe-a67b-049d4072b5c0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833656 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-registration-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.833820 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.834224 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb02e73f-f113-47f6-99cd-674686f3ad56-config\") pod \"service-ca-operator-777779d784-b8g8r\" (UID: \"fb02e73f-f113-47f6-99cd-674686f3ad56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:11 crc kubenswrapper[4736]: E0214 10:44:11.834316 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:12.334300964 +0000 UTC m=+162.702928392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.838740 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/763fa8c1-f41f-4dea-b69d-98133a1357d2-images\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.843860 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-registration-dir\") 
pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.844656 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34882bf0-6f91-4319-be98-ff12b0bcf393-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hq6m9\" (UID: \"34882bf0-6f91-4319-be98-ff12b0bcf393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.845055 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.845990 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-socket-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.846277 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.858508 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v52bz\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.866618 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad6d5e20-f083-4fc5-8856-234465465c02-service-ca-bundle\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.867959 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2bdcca22-6b12-440b-9247-b216d1d45071-node-bootstrap-token\") pod \"machine-config-server-j4mwm\" (UID: \"2bdcca22-6b12-440b-9247-b216d1d45071\") " pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.868424 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51168ccc-7cf4-4efe-a67b-049d4072b5c0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.868966 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a942552-de44-4c27-8779-4cf239de59a3-secret-volume\") pod \"collect-profiles-29517750-rkwd6\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.907654 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-trusted-ca\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.907858 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb02e73f-f113-47f6-99cd-674686f3ad56-serving-cert\") pod \"service-ca-operator-777779d784-b8g8r\" (UID: \"fb02e73f-f113-47f6-99cd-674686f3ad56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.907896 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/763fa8c1-f41f-4dea-b69d-98133a1357d2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.907979 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d5941c4-bec5-44db-aa12-d89e5ef34609-cert\") pod \"ingress-canary-dddp5\" (UID: \"0d5941c4-bec5-44db-aa12-d89e5ef34609\") " pod="openshift-ingress-canary/ingress-canary-dddp5" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.908484 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a942552-de44-4c27-8779-4cf239de59a3-config-volume\") pod \"collect-profiles-29517750-rkwd6\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 
10:44:11.909890 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/26701b87-9d9f-4444-8f65-6e645daa1714-signing-cabundle\") pod \"service-ca-9c57cc56f-spvg2\" (UID: \"26701b87-9d9f-4444-8f65-6e645daa1714\") " pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.910112 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/547644c0-393d-4c6d-b2dc-94c587cd9bfd-srv-cert\") pod \"olm-operator-6b444d44fb-7z8zv\" (UID: \"547644c0-393d-4c6d-b2dc-94c587cd9bfd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.910552 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/51168ccc-7cf4-4efe-a67b-049d4072b5c0-tmpfs\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.910719 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-mountpoint-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.911555 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c77f606-891d-4408-adc0-f27624c5de0c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nsdsx\" (UID: \"3c77f606-891d-4408-adc0-f27624c5de0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:11 crc 
kubenswrapper[4736]: I0214 10:44:11.911648 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad6d5e20-f083-4fc5-8856-234465465c02-metrics-certs\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.914553 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c696bd02-3fce-484b-a793-efb4e593bff6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmq7j\" (UID: \"c696bd02-3fce-484b-a793-efb4e593bff6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.921240 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s75ln\" (UniqueName: \"kubernetes.io/projected/4c385175-c749-4eeb-9d28-eedb51937337-kube-api-access-s75ln\") pod \"machine-approver-56656f9798-xg87q\" (UID: \"4c385175-c749-4eeb-9d28-eedb51937337\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.921713 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/763fa8c1-f41f-4dea-b69d-98133a1357d2-proxy-tls\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.922243 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-metrics-tls\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.922732 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/547644c0-393d-4c6d-b2dc-94c587cd9bfd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7z8zv\" (UID: \"547644c0-393d-4c6d-b2dc-94c587cd9bfd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.932425 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb947\" (UniqueName: \"kubernetes.io/projected/4985141b-c570-4dd3-aad8-adbf891e00e0-kube-api-access-tb947\") pod \"controller-manager-879f6c89f-brfbh\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.932991 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-csi-data-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.941965 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3947381-3e2c-4e4e-bb22-5e7d0494222c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f6vrk\" (UID: \"c3947381-3e2c-4e4e-bb22-5e7d0494222c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.942858 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/92ef3f92-697f-4e3a-a2c7-e5dc4d10983e-profile-collector-cert\") pod \"catalog-operator-68c6474976-2xp2x\" (UID: \"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.943489 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9df4c287-aa48-47d1-86b3-156b92993310-plugins-dir\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.943681 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51168ccc-7cf4-4efe-a67b-049d4072b5c0-webhook-cert\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.943720 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f22db7-b110-4a1a-823b-888bb2768191-config-volume\") pod \"dns-default-2jk7h\" (UID: \"43f22db7-b110-4a1a-823b-888bb2768191\") " pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.944542 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v52bz\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.945083 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/beb97d53-c3be-4c23-a56a-ca182e70ad0b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nhfqg\" (UID: \"beb97d53-c3be-4c23-a56a-ca182e70ad0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.946775 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34882bf0-6f91-4319-be98-ff12b0bcf393-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hq6m9\" (UID: \"34882bf0-6f91-4319-be98-ff12b0bcf393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.946830 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk"] Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.947332 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad6d5e20-f083-4fc5-8856-234465465c02-default-certificate\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.958949 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7hmxn"] Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.958998 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7n6r5"] Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.984199 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:11 crc kubenswrapper[4736]: E0214 10:44:11.984726 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:12.484706974 +0000 UTC m=+162.853334342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.985067 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c696bd02-3fce-484b-a793-efb4e593bff6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmq7j\" (UID: \"c696bd02-3fce-484b-a793-efb4e593bff6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:11 crc kubenswrapper[4736]: I0214 10:44:11.987297 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.002313 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.006204 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ctqg\" (UniqueName: 
\"kubernetes.io/projected/ad6d5e20-f083-4fc5-8856-234465465c02-kube-api-access-5ctqg\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.014075 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvk2h\" (UniqueName: \"kubernetes.io/projected/547644c0-393d-4c6d-b2dc-94c587cd9bfd-kube-api-access-pvk2h\") pod \"olm-operator-6b444d44fb-7z8zv\" (UID: \"547644c0-393d-4c6d-b2dc-94c587cd9bfd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.017897 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glxct\" (UniqueName: \"kubernetes.io/projected/1a942552-de44-4c27-8779-4cf239de59a3-kube-api-access-glxct\") pod \"collect-profiles-29517750-rkwd6\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.024368 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad6d5e20-f083-4fc5-8856-234465465c02-stats-auth\") pod \"router-default-5444994796-459gs\" (UID: \"ad6d5e20-f083-4fc5-8856-234465465c02\") " pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.025049 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.029347 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/26701b87-9d9f-4444-8f65-6e645daa1714-signing-key\") pod \"service-ca-9c57cc56f-spvg2\" (UID: \"26701b87-9d9f-4444-8f65-6e645daa1714\") " pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.030580 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/92ef3f92-697f-4e3a-a2c7-e5dc4d10983e-srv-cert\") pod \"catalog-operator-68c6474976-2xp2x\" (UID: \"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.030985 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tclh2\" (UniqueName: \"kubernetes.io/projected/43f22db7-b110-4a1a-823b-888bb2768191-kube-api-access-tclh2\") pod \"dns-default-2jk7h\" (UID: \"43f22db7-b110-4a1a-823b-888bb2768191\") " pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.035084 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c77f606-891d-4408-adc0-f27624c5de0c-proxy-tls\") pod \"machine-config-controller-84d6567774-nsdsx\" (UID: \"3c77f606-891d-4408-adc0-f27624c5de0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.035706 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2bdcca22-6b12-440b-9247-b216d1d45071-certs\") pod \"machine-config-server-j4mwm\" (UID: \"2bdcca22-6b12-440b-9247-b216d1d45071\") " 
pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.036923 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rlgw\" (UniqueName: \"kubernetes.io/projected/d8991afa-da38-4dd2-9f58-cf895ec92784-kube-api-access-5rlgw\") pod \"marketplace-operator-79b997595-v52bz\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.037470 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43f22db7-b110-4a1a-823b-888bb2768191-metrics-tls\") pod \"dns-default-2jk7h\" (UID: \"43f22db7-b110-4a1a-823b-888bb2768191\") " pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.037736 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c696bd02-3fce-484b-a793-efb4e593bff6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmq7j\" (UID: \"c696bd02-3fce-484b-a793-efb4e593bff6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.037911 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrz5m\" (UniqueName: \"kubernetes.io/projected/34882bf0-6f91-4319-be98-ff12b0bcf393-kube-api-access-qrz5m\") pod \"kube-storage-version-migrator-operator-b67b599dd-hq6m9\" (UID: \"34882bf0-6f91-4319-be98-ff12b0bcf393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.038411 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl"] 
Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.052713 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.063506 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l6bdf" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.064214 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gvsq\" (UniqueName: \"kubernetes.io/projected/51168ccc-7cf4-4efe-a67b-049d4072b5c0-kube-api-access-6gvsq\") pod \"packageserver-d55dfcdfc-wt64j\" (UID: \"51168ccc-7cf4-4efe-a67b-049d4072b5c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.065665 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlfwm\" (UniqueName: \"kubernetes.io/projected/0d5941c4-bec5-44db-aa12-d89e5ef34609-kube-api-access-xlfwm\") pod \"ingress-canary-dddp5\" (UID: \"0d5941c4-bec5-44db-aa12-d89e5ef34609\") " pod="openshift-ingress-canary/ingress-canary-dddp5" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.068018 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.071512 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nv7wg"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.085500 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:12 crc kubenswrapper[4736]: E0214 10:44:12.086048 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:12.586029866 +0000 UTC m=+162.954657234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.088429 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9xn7\" (UniqueName: \"kubernetes.io/projected/c3947381-3e2c-4e4e-bb22-5e7d0494222c-kube-api-access-q9xn7\") pod \"package-server-manager-789f6589d5-f6vrk\" (UID: \"c3947381-3e2c-4e4e-bb22-5e7d0494222c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.100666 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24n5g\" (UniqueName: \"kubernetes.io/projected/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-kube-api-access-24n5g\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.103697 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.111936 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.114412 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2f7\" (UniqueName: 
\"kubernetes.io/projected/9df4c287-aa48-47d1-86b3-156b92993310-kube-api-access-pl2f7\") pod \"csi-hostpathplugin-5gl7g\" (UID: \"9df4c287-aa48-47d1-86b3-156b92993310\") " pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.124608 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" Feb 14 10:44:12 crc kubenswrapper[4736]: W0214 10:44:12.134702 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod178ad6b5_adb5_40a2_9888_52b2a8b01d66.slice/crio-c168b6cd493c320feba20b384b81cae1c3252bbb13bc5a23b33b04e5f2fd99c5 WatchSource:0}: Error finding container c168b6cd493c320feba20b384b81cae1c3252bbb13bc5a23b33b04e5f2fd99c5: Status 404 returned error can't find the container with id c168b6cd493c320feba20b384b81cae1c3252bbb13bc5a23b33b04e5f2fd99c5 Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.140134 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzc8\" (UniqueName: \"kubernetes.io/projected/fb02e73f-f113-47f6-99cd-674686f3ad56-kube-api-access-fqzc8\") pod \"service-ca-operator-777779d784-b8g8r\" (UID: \"fb02e73f-f113-47f6-99cd-674686f3ad56\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.147058 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.152782 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8662q\" (UniqueName: \"kubernetes.io/projected/2bdcca22-6b12-440b-9247-b216d1d45071-kube-api-access-8662q\") pod \"machine-config-server-j4mwm\" (UID: \"2bdcca22-6b12-440b-9247-b216d1d45071\") " 
pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.177999 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.181280 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2s9j\" (UniqueName: \"kubernetes.io/projected/beb97d53-c3be-4c23-a56a-ca182e70ad0b-kube-api-access-q2s9j\") pod \"multus-admission-controller-857f4d67dd-nhfqg\" (UID: \"beb97d53-c3be-4c23-a56a-ca182e70ad0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.188045 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:12 crc kubenswrapper[4736]: E0214 10:44:12.188515 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:12.688495089 +0000 UTC m=+163.057122457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.208887 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.209009 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb85p\" (UniqueName: \"kubernetes.io/projected/92ef3f92-697f-4e3a-a2c7-e5dc4d10983e-kube-api-access-mb85p\") pod \"catalog-operator-68c6474976-2xp2x\" (UID: \"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.215608 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlt67\" (UniqueName: \"kubernetes.io/projected/763fa8c1-f41f-4dea-b69d-98133a1357d2-kube-api-access-qlt67\") pod \"machine-config-operator-74547568cd-qk4ch\" (UID: \"763fa8c1-f41f-4dea-b69d-98133a1357d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.218678 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.222709 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" event={"ID":"ca613a2a-3f27-44b2-a750-ed66276e5560","Type":"ContainerStarted","Data":"7be15a6dae211792ae953b0d9eb07b880d9939be3a206f631c73765f41a5da31"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.222804 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" event={"ID":"ca613a2a-3f27-44b2-a750-ed66276e5560","Type":"ContainerStarted","Data":"1ab5d74e685664dbf9f83d293bdaee6103441cc59b21b6013b8e6f4a8b5d2c11"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.224489 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" event={"ID":"484eab2e-2a8d-45f4-ada7-92639c4e6bcb","Type":"ContainerStarted","Data":"d170eb2ec80d6ba5cb3c22327b18f8b61efb744c877d6afa6f772c226ab271eb"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.225963 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" event={"ID":"4ddd9068-95aa-4e08-bed7-b400152e1766","Type":"ContainerStarted","Data":"9cf314528b9019de0d74d77e7aa5c2a35f797753463a7b4a52f69469ec81f9f7"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.226133 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.230024 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rwdt7"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.232178 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.236244 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tckpd" event={"ID":"842b9d2e-016c-412f-803d-a87a69009268","Type":"ContainerStarted","Data":"7d20a764a0aa054c855785ce8f93204804eb66104daaa66f5830eae561e5aae3"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.236294 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tckpd" event={"ID":"842b9d2e-016c-412f-803d-a87a69009268","Type":"ContainerStarted","Data":"5633813dcf87e1182c559ac09ce33453bb341605e649a2209c1d7aa3f9f5b5fd"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.236424 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2lz5\" (UniqueName: \"kubernetes.io/projected/3c77f606-891d-4408-adc0-f27624c5de0c-kube-api-access-j2lz5\") pod \"machine-config-controller-84d6567774-nsdsx\" (UID: \"3c77f606-891d-4408-adc0-f27624c5de0c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.236686 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.239890 4736 patch_prober.go:28] interesting pod/console-operator-58897d9998-tckpd container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/readyz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.239927 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tckpd" podUID="842b9d2e-016c-412f-803d-a87a69009268" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/readyz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.243731 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.244302 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.249934 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.257671 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.270166 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.273306 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" event={"ID":"bc4bed44-3d6b-4055-bb84-75071c99aef8","Type":"ContainerStarted","Data":"24ae5af21d8e87185483b6592119991b8b21c1ff20395cc654ab5e84277da9fb"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.277023 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" event={"ID":"9e900363-6bcf-4546-88e8-61fb89228809","Type":"ContainerStarted","Data":"450c009090b55c5d27cb3cfe06ac718c57e35d94a5c79f3197494f39e9f515ba"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.278523 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.282257 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r4f7j" event={"ID":"19ffdb45-8f94-48d2-93f8-b139825d4063","Type":"ContainerStarted","Data":"2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.282296 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r4f7j" event={"ID":"19ffdb45-8f94-48d2-93f8-b139825d4063","Type":"ContainerStarted","Data":"b5767418c8902edd803a86a85b49c26377a6d5ad7e019c9951f2288e46c940f7"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.283764 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" 
event={"ID":"21d3351d-662a-4e3e-b7fa-f7eb332a1506","Type":"ContainerStarted","Data":"8f10675700b070c45265bd55babd87e783c9feae06d73173f691b96abb82ba31"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.290146 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:12 crc kubenswrapper[4736]: E0214 10:44:12.291321 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:12.791298442 +0000 UTC m=+163.159925820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.294895 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" event={"ID":"b305d178-1f44-4e74-9a0f-9a6c95fb4c45","Type":"ContainerStarted","Data":"a9baff4c359eed216dcc3a6c111bd715c7a3e2496496929e435c18cc87dc8f57"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.294929 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" 
event={"ID":"b305d178-1f44-4e74-9a0f-9a6c95fb4c45","Type":"ContainerStarted","Data":"4ea5582c706992df97fb0174fab9e85ba287753d0cbd33a877740435873a558b"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.295266 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.295937 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg7xb\" (UniqueName: \"kubernetes.io/projected/6d5c2471-9df8-4ce4-a64b-2ef892d3af94-kube-api-access-jg7xb\") pod \"ingress-operator-5b745b69d9-tw8qx\" (UID: \"6d5c2471-9df8-4ce4-a64b-2ef892d3af94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.301833 4736 generic.go:334] "Generic (PLEG): container finished" podID="71913368-a56a-4e9c-b23b-e6b69f79c110" containerID="61baa5e9afde8a91379a86e46bd644b1696df86db84561b2171381eb690d9d58" exitCode=0 Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.301906 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" event={"ID":"71913368-a56a-4e9c-b23b-e6b69f79c110","Type":"ContainerDied","Data":"61baa5e9afde8a91379a86e46bd644b1696df86db84561b2171381eb690d9d58"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.301932 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" event={"ID":"71913368-a56a-4e9c-b23b-e6b69f79c110","Type":"ContainerStarted","Data":"28f028006527fc0f873c3a7bbe437e26fd3ce899a2d0077a49a2fe137028f2d4"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.305036 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.313208 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2jk7h"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.314169 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" event={"ID":"2206c275-5448-4cfb-bdfb-25180b3c01e1","Type":"ContainerStarted","Data":"d4983fb6c90e8486de10d343764976c70a8545165d5750bb983161cc7732c220"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.315367 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" event={"ID":"446f17e4-455e-45ae-affc-f27215421058","Type":"ContainerStarted","Data":"9599e5f834240a82060e45653f21801d00ddd2626e1ba63b32ec874689215503"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.317226 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnskr\" (UniqueName: \"kubernetes.io/projected/26701b87-9d9f-4444-8f65-6e645daa1714-kube-api-access-wnskr\") pod \"service-ca-9c57cc56f-spvg2\" (UID: \"26701b87-9d9f-4444-8f65-6e645daa1714\") " pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.336572 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53f352c8-f830-4ffb-8cb4-8f02ab4221d1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2pf7h\" (UID: \"53f352c8-f830-4ffb-8cb4-8f02ab4221d1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.345429 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.353131 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft2gd\" (UniqueName: \"kubernetes.io/projected/13b130be-039d-41d0-8c39-0137921f99ab-kube-api-access-ft2gd\") pod \"migrator-59844c95c7-lclrw\" (UID: \"13b130be-039d-41d0-8c39-0137921f99ab\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.359403 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-j4mwm" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.363902 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" event={"ID":"2005f6fa-25f1-421c-9028-5cea529c61be","Type":"ContainerStarted","Data":"7671d1b483f9de29ec783f6494eae4ea79730b5766b334389768f56973ebf5e5"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.365799 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dddp5" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.368281 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" event={"ID":"20e06062-7725-4eb4-8a48-7fee4dd1340a","Type":"ContainerStarted","Data":"112f24d7fed0b3a0ef55961db4a11c43c51a13b1474cca932cbd9bb96549edff"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.393435 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:12 crc kubenswrapper[4736]: E0214 10:44:12.411247 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:12.911230355 +0000 UTC m=+163.279857723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.440921 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" event={"ID":"178ad6b5-adb5-40a2-9888-52b2a8b01d66","Type":"ContainerStarted","Data":"c168b6cd493c320feba20b384b81cae1c3252bbb13bc5a23b33b04e5f2fd99c5"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.440960 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-459gs" event={"ID":"ad6d5e20-f083-4fc5-8856-234465465c02","Type":"ContainerStarted","Data":"21adb3d52cdf25e114a09b32ab5287f5dc0a98bf1eac72108f031afd4f74afbe"} Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.473352 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.496789 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:12 crc kubenswrapper[4736]: E0214 10:44:12.497267 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:12.997254118 +0000 UTC m=+163.365881486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.500190 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.526987 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.587505 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.595191 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.597986 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:12 crc kubenswrapper[4736]: E0214 10:44:12.598372 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.098352254 +0000 UTC m=+163.466979632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.611379 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-brfbh"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.669375 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l6bdf"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.693789 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8bcqb" podStartSLOduration=142.693717483 podStartE2EDuration="2m22.693717483s" 
podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:12.674514245 +0000 UTC m=+163.043141613" watchObservedRunningTime="2026-02-14 10:44:12.693717483 +0000 UTC m=+163.062344851" Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.698796 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:12 crc kubenswrapper[4736]: E0214 10:44:12.699235 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.199220384 +0000 UTC m=+163.567847752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.801217 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:12 crc kubenswrapper[4736]: E0214 10:44:12.802159 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.30214256 +0000 UTC m=+163.670769928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.848014 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v52bz"] Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.902267 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:12 crc kubenswrapper[4736]: E0214 10:44:12.902922 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.402906937 +0000 UTC m=+163.771534305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:12 crc kubenswrapper[4736]: I0214 10:44:12.980265 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.004137 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.004600 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.504568668 +0000 UTC m=+163.873196036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.035415 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.052380 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.117050 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.117462 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.617446468 +0000 UTC m=+163.986073836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.152727 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.167687 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-tckpd" podStartSLOduration=143.167672267 podStartE2EDuration="2m23.167672267s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:13.158459874 +0000 UTC m=+163.527087262" watchObservedRunningTime="2026-02-14 10:44:13.167672267 +0000 UTC m=+163.536299635" Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.220953 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.221215 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: 
\"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.221554 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.721540676 +0000 UTC m=+164.090168044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.228026 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df467c01-3f4e-41c8-b5fa-b14831cfe827-metrics-certs\") pod \"network-metrics-daemon-przcz\" (UID: \"df467c01-3f4e-41c8-b5fa-b14831cfe827\") " pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.287307 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.309918 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-r4f7j" podStartSLOduration=143.309901483 podStartE2EDuration="2m23.309901483s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:13.308604277 
+0000 UTC m=+163.677231645" watchObservedRunningTime="2026-02-14 10:44:13.309901483 +0000 UTC m=+163.678528851" Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.326390 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.328601 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.828586336 +0000 UTC m=+164.197213704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.346125 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x"] Feb 14 10:44:13 crc kubenswrapper[4736]: W0214 10:44:13.409991 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc696bd02_3fce_484b_a793_efb4e593bff6.slice/crio-dc8d30656d9a93a79db828ab360157d0edf945bd5437da1b57064ac1891cf307 WatchSource:0}: Error finding container dc8d30656d9a93a79db828ab360157d0edf945bd5437da1b57064ac1891cf307: Status 404 returned error can't 
find the container with id dc8d30656d9a93a79db828ab360157d0edf945bd5437da1b57064ac1891cf307 Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.428275 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.429001 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:13.928989183 +0000 UTC m=+164.297616551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: W0214 10:44:13.453979 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92ef3f92_697f_4e3a_a2c7_e5dc4d10983e.slice/crio-b71b91d76213cb8d3f2d36c635f78bb04a8ee767d274c0f25595b447bf0fcd16 WatchSource:0}: Error finding container b71b91d76213cb8d3f2d36c635f78bb04a8ee767d274c0f25595b447bf0fcd16: Status 404 returned error can't find the container with id b71b91d76213cb8d3f2d36c635f78bb04a8ee767d274c0f25595b447bf0fcd16 Feb 14 10:44:13 crc kubenswrapper[4736]: W0214 10:44:13.498346 4736 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34882bf0_6f91_4319_be98_ff12b0bcf393.slice/crio-636c8b0191ff2ba766484878306868d652f385dbdb28a3bad11753aaee642412 WatchSource:0}: Error finding container 636c8b0191ff2ba766484878306868d652f385dbdb28a3bad11753aaee642412: Status 404 returned error can't find the container with id 636c8b0191ff2ba766484878306868d652f385dbdb28a3bad11753aaee642412 Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.522052 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-przcz" Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.540924 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.541245 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:14.041230655 +0000 UTC m=+164.409858023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.625511 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.640075 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" event={"ID":"484eab2e-2a8d-45f4-ada7-92639c4e6bcb","Type":"ContainerStarted","Data":"a1284732ca3ab970b1f895f2b6e6a377689447a3fccbeda1e36a7c0857e603bc"} Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.642476 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.643740 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:14.143727699 +0000 UTC m=+164.512355067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.674682 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" event={"ID":"b3550c81-1f31-4800-b399-4168db6f20fc","Type":"ContainerStarted","Data":"b935a60592db782d12ff7dec6177c8ed701fa4a65870dc49342e87d8f69afe96"} Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.690603 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" event={"ID":"4985141b-c570-4dd3-aad8-adbf891e00e0","Type":"ContainerStarted","Data":"4c2e1b15368c2c1cf06a5dc4471554704372d1aa44c3f36cb7821985bc3e52c8"} Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.712113 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5gl7g"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.743963 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.744123 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:14.244100195 +0000 UTC m=+164.612727563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.744323 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.744669 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:14.244661001 +0000 UTC m=+164.613288369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.807553 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.816843 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.821956 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nhfqg"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.823863 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.849255 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.849596 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:14.349581602 +0000 UTC m=+164.718208970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.865796 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" event={"ID":"c3947381-3e2c-4e4e-bb22-5e7d0494222c","Type":"ContainerStarted","Data":"f3aec034d2ff303adca2393d0001e37dcd39ed919bc30e036424364b935ed1a2"} Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.921525 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" event={"ID":"b305d178-1f44-4e74-9a0f-9a6c95fb4c45","Type":"ContainerStarted","Data":"414adb68f6ee63f1813a6805f23191b200eea869cd6c617038f661eca8424374"} Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.940164 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dddp5"] Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.953281 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:13 crc kubenswrapper[4736]: E0214 10:44:13.953536 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:14.453525726 +0000 UTC m=+164.822153094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:13 crc kubenswrapper[4736]: I0214 10:44:13.993039 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx"] Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.007737 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2jk7h" event={"ID":"43f22db7-b110-4a1a-823b-888bb2768191","Type":"ContainerStarted","Data":"9f8bea7cfd1d8c31cc28c4f4229babc1b87c524041d14072998157ab65e69456"} Feb 14 10:44:14 crc kubenswrapper[4736]: W0214 10:44:14.023230 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeb97d53_c3be_4c23_a56a_ca182e70ad0b.slice/crio-b892a500be55ca8414beb9de8f3b303b2fea572742baf3ea77414705b575a082 WatchSource:0}: Error finding container b892a500be55ca8414beb9de8f3b303b2fea572742baf3ea77414705b575a082: Status 404 returned error can't find the container with id b892a500be55ca8414beb9de8f3b303b2fea572742baf3ea77414705b575a082 Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.060612 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.061669 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:14.561654425 +0000 UTC m=+164.930281793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.157683 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l6bdf" event={"ID":"0dac6876-5757-41e4-88ac-a640e67b013e","Type":"ContainerStarted","Data":"be0ed7f4cc0882ba90ddf745b967fae0ee200fe46d86f10b6665f5deba2126be"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.159113 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l6bdf" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.189999 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6bdf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.190051 4736 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-l6bdf" podUID="0dac6876-5757-41e4-88ac-a640e67b013e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.203179 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.203445 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:14.703431978 +0000 UTC m=+165.072059346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.215738 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx"] Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.237332 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l6bdf" podStartSLOduration=144.237317269 podStartE2EDuration="2m24.237317269s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:14.237085292 +0000 UTC m=+164.605712660" watchObservedRunningTime="2026-02-14 10:44:14.237317269 +0000 UTC m=+164.605944637" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.245794 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" event={"ID":"446f17e4-455e-45ae-affc-f27215421058","Type":"ContainerStarted","Data":"2652dc5c51f482d1e5999027234e0dc13a9182674cf3b3a97f52ca82839d22db"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.246631 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.259855 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-j4mwm" 
event={"ID":"2bdcca22-6b12-440b-9247-b216d1d45071","Type":"ContainerStarted","Data":"9f88cc1ceea32ebc3121941d714a9572698dad08575bb0f16ebdc1c43b5301d5"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.272286 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" event={"ID":"4c385175-c749-4eeb-9d28-eedb51937337","Type":"ContainerStarted","Data":"96e6d3190062daaac60f2f85f6168ed3e233615cc0e8359efec01899a9881f06"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.304180 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.305258 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:14.805243394 +0000 UTC m=+165.173870762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.333498 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thpxl" podStartSLOduration=144.333483539 podStartE2EDuration="2m24.333483539s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:14.333122909 +0000 UTC m=+164.701750277" watchObservedRunningTime="2026-02-14 10:44:14.333483539 +0000 UTC m=+164.702110907" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.366647 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" event={"ID":"3b2caf1c-b536-4518-a8fa-966eb348bad7","Type":"ContainerStarted","Data":"a852f508a8def87b3f76ff322dab97993396adcd391895601fe9d6c4713a747c"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.381612 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" event={"ID":"bc4bed44-3d6b-4055-bb84-75071c99aef8","Type":"ContainerStarted","Data":"21787e0ff958dbccc3cf1a3466da39f23d346a0c8ee39e53f361eb2bda8e9fe4"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.393404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-459gs" 
event={"ID":"ad6d5e20-f083-4fc5-8856-234465465c02","Type":"ContainerStarted","Data":"1a9d462f2c4b610fed184fd4243de105b5204eb2494e855b96f702e890775692"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.408894 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.411323 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:14.911311116 +0000 UTC m=+165.279938484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.452337 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" event={"ID":"1a942552-de44-4c27-8779-4cf239de59a3","Type":"ContainerStarted","Data":"7e96b33d95a15fd0a464bb89838d6969b033a756550604825832b482cd06da27"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.462409 4736 generic.go:334] "Generic (PLEG): container finished" podID="2206c275-5448-4cfb-bdfb-25180b3c01e1" 
containerID="676847b90126c8c91c27c43dcc85622ee0f093d0eeaccae85a8d1d57d6eac145" exitCode=0 Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.462467 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" event={"ID":"2206c275-5448-4cfb-bdfb-25180b3c01e1","Type":"ContainerDied","Data":"676847b90126c8c91c27c43dcc85622ee0f093d0eeaccae85a8d1d57d6eac145"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.467725 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" event={"ID":"c696bd02-3fce-484b-a793-efb4e593bff6","Type":"ContainerStarted","Data":"dc8d30656d9a93a79db828ab360157d0edf945bd5437da1b57064ac1891cf307"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.468393 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" event={"ID":"d8991afa-da38-4dd2-9f58-cf895ec92784","Type":"ContainerStarted","Data":"2ac9f8ec015628646fe4685b52327cd2c92ba3124c61d6b57eb8dda738ecc5cf"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.469335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" event={"ID":"9e900363-6bcf-4546-88e8-61fb89228809","Type":"ContainerStarted","Data":"0c22bc5b00f2f4c4fc4a9819a4c44b2f715322cc82311858635b7d0ead786715"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.477377 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-68p48" podStartSLOduration=143.47736001 podStartE2EDuration="2m23.47736001s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:14.417314161 +0000 UTC m=+164.785941529" 
watchObservedRunningTime="2026-02-14 10:44:14.47736001 +0000 UTC m=+164.845987378" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.477555 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-459gs" podStartSLOduration=144.477551795 podStartE2EDuration="2m24.477551795s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:14.477132574 +0000 UTC m=+164.845759962" watchObservedRunningTime="2026-02-14 10:44:14.477551795 +0000 UTC m=+164.846179163" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.482674 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" event={"ID":"21d3351d-662a-4e3e-b7fa-f7eb332a1506","Type":"ContainerStarted","Data":"958e4b9be34c9ff9976ed0b923f5791718b68a51be4727d23cc087b38021978a"} Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.482716 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.513139 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.514483 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:15.014462389 +0000 UTC m=+165.383089777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.526715 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.526791 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tckpd" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.606133 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" podStartSLOduration=144.606114026 podStartE2EDuration="2m24.606114026s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:14.546432017 +0000 UTC m=+164.915059385" watchObservedRunningTime="2026-02-14 10:44:14.606114026 +0000 UTC m=+164.974741394" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.606540 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7n6r5" podStartSLOduration=144.606530397 podStartE2EDuration="2m24.606530397s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:14.604900312 +0000 UTC m=+164.973527680" watchObservedRunningTime="2026-02-14 10:44:14.606530397 +0000 UTC m=+164.975157765" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.614283 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.616549 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:15.116533062 +0000 UTC m=+165.485160430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.646598 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j"] Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.688402 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h"] Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.694105 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-spvg2"] Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.718440 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" podStartSLOduration=143.718419158 podStartE2EDuration="2m23.718419158s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:14.710008468 +0000 UTC m=+165.078635836" watchObservedRunningTime="2026-02-14 10:44:14.718419158 +0000 UTC m=+165.087046536" Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.719422 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.719985 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:15.219966621 +0000 UTC m=+165.588593999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.725120 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.725409 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:15.22539785 +0000 UTC m=+165.594025218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.792107 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:44:14 crc kubenswrapper[4736]: W0214 10:44:14.815493 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53f352c8_f830_4ffb_8cb4_8f02ab4221d1.slice/crio-895f1a4cfb6252ddec0cae76da96f5cf0dbd1c3deb757997318429910994b268 WatchSource:0}: Error finding container 895f1a4cfb6252ddec0cae76da96f5cf0dbd1c3deb757997318429910994b268: Status 404 returned error can't find the container with id 895f1a4cfb6252ddec0cae76da96f5cf0dbd1c3deb757997318429910994b268 Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.825840 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.826120 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:15.326104725 +0000 UTC m=+165.694732093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:14 crc kubenswrapper[4736]: W0214 10:44:14.901581 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26701b87_9d9f_4444_8f65_6e645daa1714.slice/crio-6bdf7a028687238a64c4639163a401e68529a91563c2d047d094a9a3ee1e4e3d WatchSource:0}: Error finding container 6bdf7a028687238a64c4639163a401e68529a91563c2d047d094a9a3ee1e4e3d: Status 404 returned error can't find the container with id 6bdf7a028687238a64c4639163a401e68529a91563c2d047d094a9a3ee1e4e3d Feb 14 10:44:14 crc kubenswrapper[4736]: I0214 10:44:14.932612 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:14 crc kubenswrapper[4736]: E0214 10:44:14.932880 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:15.432869737 +0000 UTC m=+165.801497105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.034234 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.034505 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:15.534473937 +0000 UTC m=+165.903101305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.034624 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.034654 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.034926 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:15.534916199 +0000 UTC m=+165.903543567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.052546 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:15 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:15 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:15 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.052588 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.140329 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.140659 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:15.640645032 +0000 UTC m=+166.009272400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.162603 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-przcz"] Feb 14 10:44:15 crc kubenswrapper[4736]: W0214 10:44:15.218996 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf467c01_3f4e_41c8_b5fa_b14831cfe827.slice/crio-a349bc1b46b87e105b490662cd30e2d01515c826c1718b96533ca525e5540d92 WatchSource:0}: Error finding container a349bc1b46b87e105b490662cd30e2d01515c826c1718b96533ca525e5540d92: Status 404 returned error can't find the container with id a349bc1b46b87e105b490662cd30e2d01515c826c1718b96533ca525e5540d92 Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.242066 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.242423 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:15.742412177 +0000 UTC m=+166.111039545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.342976 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.343791 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:15.843716619 +0000 UTC m=+166.212343997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.444381 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.444766 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:15.944736282 +0000 UTC m=+166.313363650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.537993 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" event={"ID":"4c385175-c749-4eeb-9d28-eedb51937337","Type":"ContainerStarted","Data":"42ca36530eee0e8c6a52bfb717d6f9fea2d3c60eda58d0ec764fe3c1e690fe5b"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.545924 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.546190 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.046176288 +0000 UTC m=+166.414803646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.565142 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw" event={"ID":"13b130be-039d-41d0-8c39-0137921f99ab","Type":"ContainerStarted","Data":"5e5e21c46c07ae271a3c82b3082e03ccb5545f974754ed2b10e962b2aa0cdca0"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.582703 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" event={"ID":"3b2caf1c-b536-4518-a8fa-966eb348bad7","Type":"ContainerStarted","Data":"13fa351407ad896e27d6c0c2335df4e1640a4a3475ae052eb06a5f6cea467232"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.589862 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" event={"ID":"763fa8c1-f41f-4dea-b69d-98133a1357d2","Type":"ContainerStarted","Data":"8454b3d8c2ff52cb066302f37bf783bab9f8637d5195955be2e3bcb3f471fca7"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.610527 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" event={"ID":"51168ccc-7cf4-4efe-a67b-049d4072b5c0","Type":"ContainerStarted","Data":"9d5977804d8a3b0a6535522c7c052dec275fd564d9476393a323d6671593604b"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.653728 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.654172 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.154134082 +0000 UTC m=+166.522761540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.655158 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" event={"ID":"1a942552-de44-4c27-8779-4cf239de59a3","Type":"ContainerStarted","Data":"edd7a7993cf58d5f124b07e456b139d7364ff190fc4771825ec2d0f566119cca"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.660056 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" event={"ID":"6d5c2471-9df8-4ce4-a64b-2ef892d3af94","Type":"ContainerStarted","Data":"e36eac711573a6145e4ec4530854dd9624476c1e2061f906b06e86f00e562a7f"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.680867 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" event={"ID":"53f352c8-f830-4ffb-8cb4-8f02ab4221d1","Type":"ContainerStarted","Data":"895f1a4cfb6252ddec0cae76da96f5cf0dbd1c3deb757997318429910994b268"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.682961 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l6bdf" event={"ID":"0dac6876-5757-41e4-88ac-a640e67b013e","Type":"ContainerStarted","Data":"9bafbe99b69c6d9f021b2e277fa0872170dc732780191f7b7703f480bac7eeff"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.684303 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6bdf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.684356 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l6bdf" podUID="0dac6876-5757-41e4-88ac-a640e67b013e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.689379 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" event={"ID":"26701b87-9d9f-4444-8f65-6e645daa1714","Type":"ContainerStarted","Data":"6bdf7a028687238a64c4639163a401e68529a91563c2d047d094a9a3ee1e4e3d"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.691595 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" event={"ID":"2005f6fa-25f1-421c-9028-5cea529c61be","Type":"ContainerStarted","Data":"3dbac3417be64c490bfab3392f33e42a8443595a7f98d952972022e180b06840"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.700340 4736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" podStartSLOduration=145.70032311 podStartE2EDuration="2m25.70032311s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:15.695172169 +0000 UTC m=+166.063799537" watchObservedRunningTime="2026-02-14 10:44:15.70032311 +0000 UTC m=+166.068950478" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.701930 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" event={"ID":"4ddd9068-95aa-4e08-bed7-b400152e1766","Type":"ContainerStarted","Data":"b80befc9241c33661bf16260065c607bd3c5000e676ba3ec40e37c97a7efb21d"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.708558 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2jk7h" event={"ID":"43f22db7-b110-4a1a-823b-888bb2768191","Type":"ContainerStarted","Data":"f512676145773b8aa0befaf5ce7cf56761548e04e0b16311529a38e4b85ffe45"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.713701 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" event={"ID":"71913368-a56a-4e9c-b23b-e6b69f79c110","Type":"ContainerStarted","Data":"4f64aa227cae02f535fb9d14a82b4e8bec346212bd18218beee76d96a6c2ccfa"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.715799 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" event={"ID":"b3550c81-1f31-4800-b399-4168db6f20fc","Type":"ContainerStarted","Data":"439de0fdd7075f92c49783ba6a119c6fea764b653b67adab7c35af6206161ba9"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.717483 4736 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wkxgj" podStartSLOduration=145.717472931 podStartE2EDuration="2m25.717472931s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:15.715348983 +0000 UTC m=+166.083976341" watchObservedRunningTime="2026-02-14 10:44:15.717472931 +0000 UTC m=+166.086100299" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.719193 4736 generic.go:334] "Generic (PLEG): container finished" podID="20e06062-7725-4eb4-8a48-7fee4dd1340a" containerID="89c92cc64767821d07c2e860152f90ef53ed2ecc2b0bb87db14a0cdfdaaba23b" exitCode=0 Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.719282 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" event={"ID":"20e06062-7725-4eb4-8a48-7fee4dd1340a","Type":"ContainerDied","Data":"89c92cc64767821d07c2e860152f90ef53ed2ecc2b0bb87db14a0cdfdaaba23b"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.725328 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-przcz" event={"ID":"df467c01-3f4e-41c8-b5fa-b14831cfe827","Type":"ContainerStarted","Data":"a349bc1b46b87e105b490662cd30e2d01515c826c1718b96533ca525e5540d92"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.746052 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-j4mwm" event={"ID":"2bdcca22-6b12-440b-9247-b216d1d45071","Type":"ContainerStarted","Data":"3e0131d8afc10b4987a4e8f7cf2485f5b10ee24935da1aac79eae4ea275c0dfe"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.760239 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.761156 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.261142181 +0000 UTC m=+166.629769549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.774311 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9jr" podStartSLOduration=144.774292202 podStartE2EDuration="2m24.774292202s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:15.77386487 +0000 UTC m=+166.142492238" watchObservedRunningTime="2026-02-14 10:44:15.774292202 +0000 UTC m=+166.142919570" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.848466 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" 
event={"ID":"547644c0-393d-4c6d-b2dc-94c587cd9bfd","Type":"ContainerStarted","Data":"9a9130766fe41665de011b42a61e6d295e6d40b41185a15f2bdea013f2135148"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.849562 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.858894 4736 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7z8zv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.858950 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" podUID="547644c0-393d-4c6d-b2dc-94c587cd9bfd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.861239 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.862872 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.362860344 +0000 UTC m=+166.731487712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.879471 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8bxbt" podStartSLOduration=144.879455789 podStartE2EDuration="2m24.879455789s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:15.861385413 +0000 UTC m=+166.230012801" watchObservedRunningTime="2026-02-14 10:44:15.879455789 +0000 UTC m=+166.248083167" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.881465 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-j4mwm" podStartSLOduration=6.881445014 podStartE2EDuration="6.881445014s" podCreationTimestamp="2026-02-14 10:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:15.879172212 +0000 UTC m=+166.247799590" watchObservedRunningTime="2026-02-14 10:44:15.881445014 +0000 UTC m=+166.250072382" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.898236 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dddp5" event={"ID":"0d5941c4-bec5-44db-aa12-d89e5ef34609","Type":"ContainerStarted","Data":"9a57fe6dfb2056456c046464210bfde6dfeb9f638940847b97812e5385ab0274"} Feb 14 10:44:15 crc 
kubenswrapper[4736]: I0214 10:44:15.912476 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" event={"ID":"beb97d53-c3be-4c23-a56a-ca182e70ad0b","Type":"ContainerStarted","Data":"b892a500be55ca8414beb9de8f3b303b2fea572742baf3ea77414705b575a082"} Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.946129 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" podStartSLOduration=144.94610794 podStartE2EDuration="2m24.94610794s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:15.907368406 +0000 UTC m=+166.275995774" watchObservedRunningTime="2026-02-14 10:44:15.94610794 +0000 UTC m=+166.314735308" Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.962695 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:15 crc kubenswrapper[4736]: E0214 10:44:15.963110 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.463095136 +0000 UTC m=+166.831722504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:15 crc kubenswrapper[4736]: I0214 10:44:15.978182 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" event={"ID":"bc4bed44-3d6b-4055-bb84-75071c99aef8","Type":"ContainerStarted","Data":"e29bdf3d65c9f9ac9024aa7a24f9c9dbcf86597751f26dffcd3277d20ae83b72"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.015086 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dxngl" podStartSLOduration=146.015067663 podStartE2EDuration="2m26.015067663s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:16.008364729 +0000 UTC m=+166.376992097" watchObservedRunningTime="2026-02-14 10:44:16.015067663 +0000 UTC m=+166.383695031" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.027164 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" event={"ID":"178ad6b5-adb5-40a2-9888-52b2a8b01d66","Type":"ContainerStarted","Data":"84d73404734ec06fb27c9b66fe32dcb2fc62ef6c487d74d074bf2014a106eff9"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.042181 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" 
event={"ID":"4985141b-c570-4dd3-aad8-adbf891e00e0","Type":"ContainerStarted","Data":"50e4cf5d778699f79d0ecd6d164217dec068e5cf3dec8bb20eb8fda00c4c8044"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.042938 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.043985 4736 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-brfbh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.044024 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" podUID="4985141b-c570-4dd3-aad8-adbf891e00e0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.050127 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:16 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:16 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:16 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.050197 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 
10:44:16.050619 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nv7wg" podStartSLOduration=146.050599519 podStartE2EDuration="2m26.050599519s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:16.048640715 +0000 UTC m=+166.417268083" watchObservedRunningTime="2026-02-14 10:44:16.050599519 +0000 UTC m=+166.419226887" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.064013 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.065389 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.565370745 +0000 UTC m=+166.933998113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.074137 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" event={"ID":"c3947381-3e2c-4e4e-bb22-5e7d0494222c","Type":"ContainerStarted","Data":"08b4c3df46be02ea490fb22db5592cd5da3045e0cfac74897e3e69c1e763f03b"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.074180 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" event={"ID":"c3947381-3e2c-4e4e-bb22-5e7d0494222c","Type":"ContainerStarted","Data":"aadeadab1bf465bb26a2af84c919ba711c37f6b4fcbb7eff22c16d2d543c796e"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.074764 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.087690 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" podStartSLOduration=146.087673787 podStartE2EDuration="2m26.087673787s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:16.085422245 +0000 UTC m=+166.454049613" watchObservedRunningTime="2026-02-14 10:44:16.087673787 +0000 UTC m=+166.456301155" Feb 14 10:44:16 crc 
kubenswrapper[4736]: I0214 10:44:16.105591 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" event={"ID":"fb02e73f-f113-47f6-99cd-674686f3ad56","Type":"ContainerStarted","Data":"e5b9c8924ea68a05c064097f5e592ad4e606a6ec4adc53c9e35f4d793f54eaa1"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.127587 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" event={"ID":"34882bf0-6f91-4319-be98-ff12b0bcf393","Type":"ContainerStarted","Data":"a1978031bef6717c715e43190cbe14202c50a179dee10261bc3aaf5ef3953b80"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.127628 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" event={"ID":"34882bf0-6f91-4319-be98-ff12b0bcf393","Type":"ContainerStarted","Data":"636c8b0191ff2ba766484878306868d652f385dbdb28a3bad11753aaee642412"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.145284 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" event={"ID":"3c77f606-891d-4408-adc0-f27624c5de0c","Type":"ContainerStarted","Data":"d5242bd6ba3706e75e80ade4bfdbb71f272a2496a3f27d81200648b9e820b842"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.145438 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" podStartSLOduration=145.145418893 podStartE2EDuration="2m25.145418893s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:16.124411136 +0000 UTC m=+166.493038514" watchObservedRunningTime="2026-02-14 
10:44:16.145418893 +0000 UTC m=+166.514046261" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.146318 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" event={"ID":"9df4c287-aa48-47d1-86b3-156b92993310","Type":"ContainerStarted","Data":"fa68c7afaa58089572f68430e7df27e6575dd3bd8f2da06b0ac71a373349b69d"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.156245 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hq6m9" podStartSLOduration=145.156217949 podStartE2EDuration="2m25.156217949s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:16.15479876 +0000 UTC m=+166.523426118" watchObservedRunningTime="2026-02-14 10:44:16.156217949 +0000 UTC m=+166.524845317" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.164630 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.166293 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.666278265 +0000 UTC m=+167.034905633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.204027 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" event={"ID":"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e","Type":"ContainerStarted","Data":"86ec216bd2f2e47210f398953da13e8a2e399afb6f205e785f2fbe91aad7daa3"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.204083 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" event={"ID":"92ef3f92-697f-4e3a-a2c7-e5dc4d10983e","Type":"ContainerStarted","Data":"b71b91d76213cb8d3f2d36c635f78bb04a8ee767d274c0f25595b447bf0fcd16"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.205105 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.206244 4736 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2xp2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.206277 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" podUID="92ef3f92-697f-4e3a-a2c7-e5dc4d10983e" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.220934 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" event={"ID":"d8991afa-da38-4dd2-9f58-cf895ec92784","Type":"ContainerStarted","Data":"93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e"} Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.221637 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.231195 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" podStartSLOduration=145.231174978 podStartE2EDuration="2m25.231174978s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:16.225380898 +0000 UTC m=+166.594008266" watchObservedRunningTime="2026-02-14 10:44:16.231174978 +0000 UTC m=+166.599802346" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.235868 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v52bz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.235907 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: 
connection refused" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.247849 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" podStartSLOduration=145.247828845 podStartE2EDuration="2m25.247828845s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:16.246499558 +0000 UTC m=+166.615126926" watchObservedRunningTime="2026-02-14 10:44:16.247828845 +0000 UTC m=+166.616456213" Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.276234 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.277218 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.777206282 +0000 UTC m=+167.145833650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.378140 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.380523 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.880499378 +0000 UTC m=+167.249126796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.479900 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.480239 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:16.980226896 +0000 UTC m=+167.348854264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.581926 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.582218 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.082188576 +0000 UTC m=+167.450815944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.582506 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.582805 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.082793093 +0000 UTC m=+167.451420461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.683244 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.683883 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.183849618 +0000 UTC m=+167.552476976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.784715 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.785024 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.285014066 +0000 UTC m=+167.653641434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.885993 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.886244 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.386217714 +0000 UTC m=+167.754845082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.886539 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.886892 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.386879293 +0000 UTC m=+167.755506661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:16 crc kubenswrapper[4736]: I0214 10:44:16.987121 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:16 crc kubenswrapper[4736]: E0214 10:44:16.987574 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.487556857 +0000 UTC m=+167.856184215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.033862 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:17 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:17 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:17 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.033923 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.088689 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.089019 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:17.589005333 +0000 UTC m=+167.957632701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.189910 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.190465 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.690428568 +0000 UTC m=+168.059055936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.229440 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" event={"ID":"4c385175-c749-4eeb-9d28-eedb51937337","Type":"ContainerStarted","Data":"c6a3ec0fbf841e191422140632007485707a74eb3d1b126716b3c16fe864333e"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.247302 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw" event={"ID":"13b130be-039d-41d0-8c39-0137921f99ab","Type":"ContainerStarted","Data":"5a17a4c18a63430d41a876e13d9552f4686ce6527aeaed985b83f9c8a451a5e8"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.247349 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw" event={"ID":"13b130be-039d-41d0-8c39-0137921f99ab","Type":"ContainerStarted","Data":"179f3be1800571ce070664ecb4c9c338aeb08d8db3247acf05cfabca2450ac24"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.265089 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-przcz" event={"ID":"df467c01-3f4e-41c8-b5fa-b14831cfe827","Type":"ContainerStarted","Data":"bad795a435e1a3df4457878edba6423c3a11c9ecb3712045a9ee5eceec41b964"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.265159 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-przcz" 
event={"ID":"df467c01-3f4e-41c8-b5fa-b14831cfe827","Type":"ContainerStarted","Data":"723e2d73e3a94503c7ca5343d4683fb277b65d3103a2760c723bef8690231954"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.271791 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" event={"ID":"20e06062-7725-4eb4-8a48-7fee4dd1340a","Type":"ContainerStarted","Data":"da14b6afef2cd38840ea0042109ec7536ba8fc544f9c309bb7517cb576a1106d"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.280069 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" event={"ID":"3b2caf1c-b536-4518-a8fa-966eb348bad7","Type":"ContainerStarted","Data":"53b468a97db958a7f9ce6b048b877fb1c55242d61d0ef850fd5c87c26ff1a1c2"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.290534 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" event={"ID":"763fa8c1-f41f-4dea-b69d-98133a1357d2","Type":"ContainerStarted","Data":"480fc1d464e56288ccf06444ffa6a1163c8fa0214d19177d2bfe812d83e0265e"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.290787 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" event={"ID":"763fa8c1-f41f-4dea-b69d-98133a1357d2","Type":"ContainerStarted","Data":"2a067490be9951c32e8f2e93cf6312467cbce68f74c873bee49c6cb27594bbde"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.292909 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:17 crc 
kubenswrapper[4736]: E0214 10:44:17.293303 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.793288072 +0000 UTC m=+168.161915440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.294587 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xg87q" podStartSLOduration=147.294576698 podStartE2EDuration="2m27.294576698s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.271505234 +0000 UTC m=+167.640132612" watchObservedRunningTime="2026-02-14 10:44:17.294576698 +0000 UTC m=+167.663204066" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.295153 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" event={"ID":"6d5c2471-9df8-4ce4-a64b-2ef892d3af94","Type":"ContainerStarted","Data":"cc973263ecf0e028bfb32d6f21a071486f785aae5ae1e1bbff56c011b00845e3"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.295253 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" 
event={"ID":"6d5c2471-9df8-4ce4-a64b-2ef892d3af94","Type":"ContainerStarted","Data":"9181db6db002b2f9a67af42e6942949dec53ec0651670f1439cb54f25205137e"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.295728 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lclrw" podStartSLOduration=146.295723569 podStartE2EDuration="2m26.295723569s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.293686303 +0000 UTC m=+167.662313681" watchObservedRunningTime="2026-02-14 10:44:17.295723569 +0000 UTC m=+167.664350937" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.302193 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" event={"ID":"71913368-a56a-4e9c-b23b-e6b69f79c110","Type":"ContainerStarted","Data":"e9fd336d7cb8d03df6f008a97c54049131e0b560787699c96a6c8db1ab8d8dbd"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.311066 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" event={"ID":"51168ccc-7cf4-4efe-a67b-049d4072b5c0","Type":"ContainerStarted","Data":"7e67ddece8e3e62d22be1529f4007829ecfa6e8c3c2940bda137dd4182dedc78"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.312829 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.315078 4736 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wt64j container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Feb 14 10:44:17 
crc kubenswrapper[4736]: I0214 10:44:17.315120 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" podUID="51168ccc-7cf4-4efe-a67b-049d4072b5c0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.320726 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" event={"ID":"9df4c287-aa48-47d1-86b3-156b92993310","Type":"ContainerStarted","Data":"5b58edc0693e5225f52fd9c5ca9d195d76e8af55db77c516633f9ec8d46ea72b"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.323177 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-przcz" podStartSLOduration=147.323160433 podStartE2EDuration="2m27.323160433s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.322640208 +0000 UTC m=+167.691267576" watchObservedRunningTime="2026-02-14 10:44:17.323160433 +0000 UTC m=+167.691787801" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.331676 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" event={"ID":"547644c0-393d-4c6d-b2dc-94c587cd9bfd","Type":"ContainerStarted","Data":"121b917540c133d787d94d6d9e5cc03ab36bf691bbd592feae3d8268f8c36998"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.332965 4736 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7z8zv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 14 10:44:17 
crc kubenswrapper[4736]: I0214 10:44:17.333372 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" podUID="547644c0-393d-4c6d-b2dc-94c587cd9bfd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.347151 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" event={"ID":"c696bd02-3fce-484b-a793-efb4e593bff6","Type":"ContainerStarted","Data":"17a7246b3bf718e1f5bd67da33ce12ac9eafb8d40d683f607a6c9198f7b7721d"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.351789 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qk4ch" podStartSLOduration=146.351775798 podStartE2EDuration="2m26.351775798s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.350285157 +0000 UTC m=+167.718912525" watchObservedRunningTime="2026-02-14 10:44:17.351775798 +0000 UTC m=+167.720403166" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.355870 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" event={"ID":"fb02e73f-f113-47f6-99cd-674686f3ad56","Type":"ContainerStarted","Data":"4e81a5851fc87b813c6bafbbdbeef37a9de088e3fb8b438ca261f5ad8111df45"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.357530 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" 
event={"ID":"beb97d53-c3be-4c23-a56a-ca182e70ad0b","Type":"ContainerStarted","Data":"c12f453adf3e3588e51e9887dbe71d83a2558e97a5ca1ef5135d19a2666b79c3"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.357554 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" event={"ID":"beb97d53-c3be-4c23-a56a-ca182e70ad0b","Type":"ContainerStarted","Data":"9ee9f180e311f28a8afddc8dc4e5552e36d4adb31e6854b4df5b474a180f2809"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.368091 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" event={"ID":"53f352c8-f830-4ffb-8cb4-8f02ab4221d1","Type":"ContainerStarted","Data":"27e0728825781f66489dd0bb0bb8ab04a0d69cd1d1bc16b3fda26232279c34bf"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.373013 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" event={"ID":"26701b87-9d9f-4444-8f65-6e645daa1714","Type":"ContainerStarted","Data":"b04b8e96b18ca717d909fadaa84ec61afc359b933920746366f48f99962e8380"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.379586 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dddp5" event={"ID":"0d5941c4-bec5-44db-aa12-d89e5ef34609","Type":"ContainerStarted","Data":"bb3b5ad1afa4befeec381189702de66c7b3aa4e0f0d46c1d08c8595cb4f96bdc"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.390723 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" event={"ID":"2206c275-5448-4cfb-bdfb-25180b3c01e1","Type":"ContainerStarted","Data":"e7cfcae4493db7403bfb5f99daedf6f9d42a9d46b6af88e8ba39f61ee81f4139"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.391809 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.391177 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" podStartSLOduration=147.39116802 podStartE2EDuration="2m27.39116802s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.390535503 +0000 UTC m=+167.759162871" watchObservedRunningTime="2026-02-14 10:44:17.39116802 +0000 UTC m=+167.759795388" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.393392 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.395369 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:17.895350865 +0000 UTC m=+168.263978233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.404524 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" event={"ID":"3c77f606-891d-4408-adc0-f27624c5de0c","Type":"ContainerStarted","Data":"0950062c23f8b5ead0e486da34b7b56ce3553f19dbf3ee0a3747fc4042145933"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.404563 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" event={"ID":"3c77f606-891d-4408-adc0-f27624c5de0c","Type":"ContainerStarted","Data":"2ee5f0f630a8b2da0e036755b7bd42009d9705c2a76563ac03a68bab65918408"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.409183 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2jk7h" event={"ID":"43f22db7-b110-4a1a-823b-888bb2768191","Type":"ContainerStarted","Data":"31a38a51ee2499c290076973790db806f588f820451cf97c9356a5717d617ed2"} Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.409243 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.412630 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v52bz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 14 
10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.412679 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.415409 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6bdf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.415477 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l6bdf" podUID="0dac6876-5757-41e4-88ac-a640e67b013e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.433655 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.436641 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xp2x" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.441477 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" podStartSLOduration=146.441455431 podStartE2EDuration="2m26.441455431s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.43196189 
+0000 UTC m=+167.800589258" watchObservedRunningTime="2026-02-14 10:44:17.441455431 +0000 UTC m=+167.810082799" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.481502 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" podStartSLOduration=146.4814848 podStartE2EDuration="2m26.4814848s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.478431546 +0000 UTC m=+167.847058914" watchObservedRunningTime="2026-02-14 10:44:17.4814848 +0000 UTC m=+167.850112168" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.499854 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.502120 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.002101656 +0000 UTC m=+168.370729024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.518706 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tw8qx" podStartSLOduration=147.518689482 podStartE2EDuration="2m27.518689482s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.51679908 +0000 UTC m=+167.885426448" watchObservedRunningTime="2026-02-14 10:44:17.518689482 +0000 UTC m=+167.887316850" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.607610 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.607794 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.107768108 +0000 UTC m=+168.476395466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.607990 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.608344 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.108336593 +0000 UTC m=+168.476963961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.630030 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rwdt7" podStartSLOduration=147.630014859 podStartE2EDuration="2m27.630014859s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.558839554 +0000 UTC m=+167.927466922" watchObservedRunningTime="2026-02-14 10:44:17.630014859 +0000 UTC m=+167.998642227" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.686522 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmq7j" podStartSLOduration=146.6865074 podStartE2EDuration="2m26.6865074s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.686449618 +0000 UTC m=+168.055076986" watchObservedRunningTime="2026-02-14 10:44:17.6865074 +0000 UTC m=+168.055134768" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.695276 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.695337 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.709186 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.709421 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.209390038 +0000 UTC m=+168.578017406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.709609 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.709955 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.209939643 +0000 UTC m=+168.578567011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.811065 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.811249 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.311214754 +0000 UTC m=+168.679842122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.811461 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.811719 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.311711888 +0000 UTC m=+168.680339246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.912585 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.912776 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.412739052 +0000 UTC m=+168.781366420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.913140 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:17 crc kubenswrapper[4736]: E0214 10:44:17.913581 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.413560094 +0000 UTC m=+168.782187542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.950004 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-spvg2" podStartSLOduration=146.949982755 podStartE2EDuration="2m26.949982755s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.788023737 +0000 UTC m=+168.156651105" watchObservedRunningTime="2026-02-14 10:44:17.949982755 +0000 UTC m=+168.318610133" Feb 14 10:44:17 crc kubenswrapper[4736]: I0214 10:44:17.958597 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b8g8r" podStartSLOduration=146.958577711 podStartE2EDuration="2m26.958577711s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:17.950125718 +0000 UTC m=+168.318753086" watchObservedRunningTime="2026-02-14 10:44:17.958577711 +0000 UTC m=+168.327205069" Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.014042 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.014354 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.514331122 +0000 UTC m=+168.882958500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.014737 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.015037 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.515026791 +0000 UTC m=+168.883654159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.029191 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:18 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:18 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:18 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.029263 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.116208 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.116572 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:18.616557559 +0000 UTC m=+168.985184927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.143118 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nhfqg" podStartSLOduration=147.143102708 podStartE2EDuration="2m27.143102708s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:18.140213978 +0000 UTC m=+168.508841346" watchObservedRunningTime="2026-02-14 10:44:18.143102708 +0000 UTC m=+168.511730066" Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.217184 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.217570 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.717555601 +0000 UTC m=+169.086182959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.318331 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.318658 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:18.818643637 +0000 UTC m=+169.187271005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.416878 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qr5lk container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.416928 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" podUID="2206c275-5448-4cfb-bdfb-25180b3c01e1" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.419092 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.419413 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:18.919398774 +0000 UTC m=+169.288026142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.450760 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7z8zv" Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.507467 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nsdsx" podStartSLOduration=147.507449461 podStartE2EDuration="2m27.507449461s" podCreationTimestamp="2026-02-14 10:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:18.501024315 +0000 UTC m=+168.869651683" watchObservedRunningTime="2026-02-14 10:44:18.507449461 +0000 UTC m=+168.876076829" Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.507803 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2pf7h" podStartSLOduration=148.507798921 podStartE2EDuration="2m28.507798921s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:18.435162337 +0000 UTC m=+168.803789705" watchObservedRunningTime="2026-02-14 10:44:18.507798921 +0000 UTC m=+168.876426289" 
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.520209 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.520576 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.020546861 +0000 UTC m=+169.389174229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.522216 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.523574 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:19.023562924 +0000 UTC m=+169.392190292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.554632 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dddp5" podStartSLOduration=9.554612296 podStartE2EDuration="9.554612296s" podCreationTimestamp="2026-02-14 10:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:18.552323004 +0000 UTC m=+168.920950372" watchObservedRunningTime="2026-02-14 10:44:18.554612296 +0000 UTC m=+168.923239664"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.630640 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.630977 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.130962183 +0000 UTC m=+169.499589551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.633701 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2jk7h" podStartSLOduration=9.633689458 podStartE2EDuration="9.633689458s" podCreationTimestamp="2026-02-14 10:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:18.590544933 +0000 UTC m=+168.959172301" watchObservedRunningTime="2026-02-14 10:44:18.633689458 +0000 UTC m=+169.002316826"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.634232 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kxsl4"]
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.635053 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.647045 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.690826 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxsl4"]
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.737492 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-utilities\") pod \"community-operators-kxsl4\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.737593 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.737633 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb97l\" (UniqueName: \"kubernetes.io/projected/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-kube-api-access-sb97l\") pod \"community-operators-kxsl4\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.737666 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-catalog-content\") pod \"community-operators-kxsl4\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.743114 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.243097652 +0000 UTC m=+169.611725020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.746352 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" podStartSLOduration=148.746326101 podStartE2EDuration="2m28.746326101s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:18.727179195 +0000 UTC m=+169.095806563" watchObservedRunningTime="2026-02-14 10:44:18.746326101 +0000 UTC m=+169.114953469"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.763570 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hpgts"]
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.764503 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.771944 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.806240 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpgts"]
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.851562 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.851764 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb97l\" (UniqueName: \"kubernetes.io/projected/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-kube-api-access-sb97l\") pod \"community-operators-kxsl4\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.851789 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-catalog-content\") pod \"community-operators-kxsl4\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.851846 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-utilities\") pod \"community-operators-kxsl4\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.852250 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-utilities\") pod \"community-operators-kxsl4\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.852316 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.352303181 +0000 UTC m=+169.720930549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.852934 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-catalog-content\") pod \"community-operators-kxsl4\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.964108 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrg7\" (UniqueName: \"kubernetes.io/projected/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-kube-api-access-jbrg7\") pod \"certified-operators-hpgts\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.964195 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-utilities\") pod \"certified-operators-hpgts\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.964236 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8"
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.964262 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-catalog-content\") pod \"certified-operators-hpgts\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:18 crc kubenswrapper[4736]: E0214 10:44:18.964563 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.464552033 +0000 UTC m=+169.833179401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:18 crc kubenswrapper[4736]: I0214 10:44:18.992214 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb97l\" (UniqueName: \"kubernetes.io/projected/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-kube-api-access-sb97l\") pod \"community-operators-kxsl4\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.030907 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 14 10:44:19 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld
Feb 14 10:44:19 crc kubenswrapper[4736]: [+]process-running ok
Feb 14 10:44:19 crc kubenswrapper[4736]: healthz check failed
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.031281 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.066791 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.066834 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pstv4"]
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.067024 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrg7\" (UniqueName: \"kubernetes.io/projected/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-kube-api-access-jbrg7\") pod \"certified-operators-hpgts\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.067078 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-utilities\") pod \"certified-operators-hpgts\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.067119 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-catalog-content\") pod \"certified-operators-hpgts\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.067404 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.567374486 +0000 UTC m=+169.936001864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.067517 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-catalog-content\") pod \"certified-operators-hpgts\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.067826 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.068501 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-utilities\") pod \"certified-operators-hpgts\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.150594 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrg7\" (UniqueName: \"kubernetes.io/projected/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-kube-api-access-jbrg7\") pod \"certified-operators-hpgts\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.168572 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-utilities\") pod \"community-operators-pstv4\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") " pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.168620 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-catalog-content\") pod \"community-operators-pstv4\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") " pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.168654 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjdq\" (UniqueName: \"kubernetes.io/projected/c387581c-aaa7-4dbb-875a-8c506635f598-kube-api-access-jbjdq\") pod \"community-operators-pstv4\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") " pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.168683 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8"
Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.169003 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.668986786 +0000 UTC m=+170.037614244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.226239 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pstv4"]
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.270219 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.270390 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.7703655 +0000 UTC m=+170.138992858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.270803 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-utilities\") pod \"community-operators-pstv4\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") " pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.270959 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-catalog-content\") pod \"community-operators-pstv4\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") " pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.271073 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjdq\" (UniqueName: \"kubernetes.io/projected/c387581c-aaa7-4dbb-875a-8c506635f598-kube-api-access-jbjdq\") pod \"community-operators-pstv4\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") " pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.271199 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.271311 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-catalog-content\") pod \"community-operators-pstv4\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") " pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.271512 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-utilities\") pod \"community-operators-pstv4\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") " pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.271523 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.771506752 +0000 UTC m=+170.140134120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.288311 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxsl4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.310687 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prk5f"]
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.311527 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.370453 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjdq\" (UniqueName: \"kubernetes.io/projected/c387581c-aaa7-4dbb-875a-8c506635f598-kube-api-access-jbjdq\") pod \"community-operators-pstv4\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") " pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.372432 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.372680 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-catalog-content\") pod \"certified-operators-prk5f\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.372736 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jb48\" (UniqueName: \"kubernetes.io/projected/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-kube-api-access-9jb48\") pod \"certified-operators-prk5f\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.372783 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-utilities\") pod \"certified-operators-prk5f\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.372780 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prk5f"]
Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.372902 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.872876095 +0000 UTC m=+170.241503453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.396127 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.398825 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.399397 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.404100 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.404374 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.407751 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.419180 4736 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wt64j container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.419226 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" podUID="51168ccc-7cf4-4efe-a67b-049d4072b5c0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.429632 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.468226 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" event={"ID":"9df4c287-aa48-47d1-86b3-156b92993310","Type":"ContainerStarted","Data":"f0320682c1ccc6c88461e3ccf049ab2e9b5ffdd76a2a6d7eb13c3335e6e0018a"}
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.478346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147a9fda-d92b-444a-a118-0085207d8f57-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"147a9fda-d92b-444a-a118-0085207d8f57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.478387 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jb48\" (UniqueName: \"kubernetes.io/projected/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-kube-api-access-9jb48\") pod \"certified-operators-prk5f\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.478442 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-utilities\") pod \"certified-operators-prk5f\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.478561 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147a9fda-d92b-444a-a118-0085207d8f57-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"147a9fda-d92b-444a-a118-0085207d8f57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.478632 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-catalog-content\") pod \"certified-operators-prk5f\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.478671 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.481372 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-utilities\") pod \"certified-operators-prk5f\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.482719 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-catalog-content\") pod \"certified-operators-prk5f\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.482883 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:19.982732092 +0000 UTC m=+170.351359550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.548486 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jb48\" (UniqueName: \"kubernetes.io/projected/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-kube-api-access-9jb48\") pod \"certified-operators-prk5f\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.579406 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.579629 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147a9fda-d92b-444a-a118-0085207d8f57-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"147a9fda-d92b-444a-a118-0085207d8f57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.579706 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147a9fda-d92b-444a-a118-0085207d8f57-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"147a9fda-d92b-444a-a118-0085207d8f57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.579789 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.079762266 +0000 UTC m=+170.448389634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.579933 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147a9fda-d92b-444a-a118-0085207d8f57-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"147a9fda-d92b-444a-a118-0085207d8f57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.628258 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147a9fda-d92b-444a-a118-0085207d8f57-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"147a9fda-d92b-444a-a118-0085207d8f57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.646634 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prk5f"
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.682602 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8"
Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.683021 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.183004711 +0000 UTC m=+170.551632079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.711682 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.783491 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.783705 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.283677545 +0000 UTC m=+170.652304913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.783940 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.784297 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.284264581 +0000 UTC m=+170.652891949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.837945 4736 csr.go:261] certificate signing request csr-sjkgx is approved, waiting to be issued Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.887184 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.887545 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.387528597 +0000 UTC m=+170.756155965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.913652 4736 csr.go:257] certificate signing request csr-sjkgx is issued Feb 14 10:44:19 crc kubenswrapper[4736]: I0214 10:44:19.988456 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:19 crc kubenswrapper[4736]: E0214 10:44:19.994945 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.494921666 +0000 UTC m=+170.863549034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.045832 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:20 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:20 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:20 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.045881 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.095247 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.095457 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 10:44:20.595431416 +0000 UTC m=+170.964058784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.095497 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.096135 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.596125795 +0000 UTC m=+170.964753163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.206242 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.206512 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.706497325 +0000 UTC m=+171.075124693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.309861 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.310415 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.810402578 +0000 UTC m=+171.179029946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.382400 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qr5lk" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.411036 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.411432 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:20.911416192 +0000 UTC m=+171.280043560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.468977 4736 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wt64j container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.469049 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" podUID="51168ccc-7cf4-4efe-a67b-049d4072b5c0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.516995 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.524358 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-14 10:44:21.024337373 +0000 UTC m=+171.392964741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.530454 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpgts"] Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.578548 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" event={"ID":"9df4c287-aa48-47d1-86b3-156b92993310","Type":"ContainerStarted","Data":"b9bff7f8e2972b3cb30e1ccf7802cf967f44b1f5e357e4f1cf437afbb0e6fac8"} Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.590146 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kqrw8"] Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.591254 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.594333 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.594959 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqrw8"] Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.609824 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.610575 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.614781 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.616407 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.636599 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.636876 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.136860302 +0000 UTC m=+171.505487670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.637227 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-utilities\") pod \"redhat-marketplace-kqrw8\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.637266 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.637316 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.637340 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-catalog-content\") pod 
\"redhat-marketplace-kqrw8\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.637393 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wwwt\" (UniqueName: \"kubernetes.io/projected/b71b0996-cb92-4faa-9245-95f7e9afb7fb-kube-api-access-2wwwt\") pod \"redhat-marketplace-kqrw8\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.637413 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.637703 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.137695995 +0000 UTC m=+171.506323353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.653916 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.668586 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pstv4"] Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.746275 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.746547 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-utilities\") pod \"redhat-marketplace-kqrw8\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.746604 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:20 crc 
kubenswrapper[4736]: I0214 10:44:20.746658 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-catalog-content\") pod \"redhat-marketplace-kqrw8\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.746691 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wwwt\" (UniqueName: \"kubernetes.io/projected/b71b0996-cb92-4faa-9245-95f7e9afb7fb-kube-api-access-2wwwt\") pod \"redhat-marketplace-kqrw8\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.746722 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.747123 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.24710516 +0000 UTC m=+171.615732528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.747549 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-utilities\") pod \"redhat-marketplace-kqrw8\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.747762 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.755343 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-catalog-content\") pod \"redhat-marketplace-kqrw8\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.794206 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:20 crc 
kubenswrapper[4736]: I0214 10:44:20.834892 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wwwt\" (UniqueName: \"kubernetes.io/projected/b71b0996-cb92-4faa-9245-95f7e9afb7fb-kube-api-access-2wwwt\") pod \"redhat-marketplace-kqrw8\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.849593 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.849897 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.349886732 +0000 UTC m=+171.718514100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.917718 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.918064 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-14 10:39:19 +0000 UTC, rotation deadline is 2027-01-02 17:42:56.443859471 +0000 UTC Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.918100 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7734h58m35.525761686s for next certificate rotation Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.933236 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jpp7r"] Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.934345 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.950351 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:20 crc kubenswrapper[4736]: E0214 10:44:20.951025 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.451009019 +0000 UTC m=+171.819636387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:20 crc kubenswrapper[4736]: I0214 10:44:20.955949 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpp7r"] Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.027766 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.029168 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prk5f"] Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.036991 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:21 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:21 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:21 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.037056 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.052514 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pzmxs\" (UniqueName: \"kubernetes.io/projected/33691b33-a810-4692-a71a-0a570d29c6e8-kube-api-access-pzmxs\") pod \"redhat-marketplace-jpp7r\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.052561 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.052594 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-utilities\") pod \"redhat-marketplace-jpp7r\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.052613 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-catalog-content\") pod \"redhat-marketplace-jpp7r\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: E0214 10:44:21.052910 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.552899866 +0000 UTC m=+171.921527234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:21 crc kubenswrapper[4736]: W0214 10:44:21.120012 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff9c6f99_33a8_48c1_8ecf_56a4e9b4ec8e.slice/crio-e86eb71c345b0bd74ff52d4fe3bf8c4f2840d73813c761085a7491023f842ac2 WatchSource:0}: Error finding container e86eb71c345b0bd74ff52d4fe3bf8c4f2840d73813c761085a7491023f842ac2: Status 404 returned error can't find the container with id e86eb71c345b0bd74ff52d4fe3bf8c4f2840d73813c761085a7491023f842ac2 Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.121023 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.121853 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.153314 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.153564 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzmxs\" (UniqueName: 
\"kubernetes.io/projected/33691b33-a810-4692-a71a-0a570d29c6e8-kube-api-access-pzmxs\") pod \"redhat-marketplace-jpp7r\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.153624 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-utilities\") pod \"redhat-marketplace-jpp7r\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.153644 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-catalog-content\") pod \"redhat-marketplace-jpp7r\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.154051 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-catalog-content\") pod \"redhat-marketplace-jpp7r\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: E0214 10:44:21.154475 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.654456175 +0000 UTC m=+172.023083543 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.154687 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-utilities\") pod \"redhat-marketplace-jpp7r\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.183439 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzmxs\" (UniqueName: \"kubernetes.io/projected/33691b33-a810-4692-a71a-0a570d29c6e8-kube-api-access-pzmxs\") pod \"redhat-marketplace-jpp7r\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.201507 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxsl4"] Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.254513 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:21 crc kubenswrapper[4736]: E0214 10:44:21.256483 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.756472196 +0000 UTC m=+172.125099564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:21 crc kubenswrapper[4736]: W0214 10:44:21.263062 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f2feb07_1c8a_4c17_81a1_24f60ac3f31f.slice/crio-618b2f5e0904935178f746e6d4294337b3c59cd4e4eb69e8472d5e34999d539a WatchSource:0}: Error finding container 618b2f5e0904935178f746e6d4294337b3c59cd4e4eb69e8472d5e34999d539a: Status 404 returned error can't find the container with id 618b2f5e0904935178f746e6d4294337b3c59cd4e4eb69e8472d5e34999d539a Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.264855 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.265404 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.272983 4736 patch_prober.go:28] interesting pod/console-f9d7485db-r4f7j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.273046 
4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r4f7j" podUID="19ffdb45-8f94-48d2-93f8-b139825d4063" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.305904 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.344947 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.356120 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:21 crc kubenswrapper[4736]: E0214 10:44:21.356411 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.8563953 +0000 UTC m=+172.225022658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.364320 4736 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 14 10:44:21 crc kubenswrapper[4736]: W0214 10:44:21.374451 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod147a9fda_d92b_444a_a118_0085207d8f57.slice/crio-8e06b9a13269d376bb011ac76d9922fadf003ec3ae39f24610c21d77ae0e96fc WatchSource:0}: Error finding container 8e06b9a13269d376bb011ac76d9922fadf003ec3ae39f24610c21d77ae0e96fc: Status 404 returned error can't find the container with id 8e06b9a13269d376bb011ac76d9922fadf003ec3ae39f24610c21d77ae0e96fc Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.445283 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqrw8"] Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.477795 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:21 crc kubenswrapper[4736]: E0214 10:44:21.478168 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:21.978151564 +0000 UTC m=+172.346778932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.578627 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:21 crc kubenswrapper[4736]: E0214 10:44:21.579048 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 10:44:22.079031744 +0000 UTC m=+172.447659112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.579228 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:21 crc kubenswrapper[4736]: E0214 10:44:21.579552 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 10:44:22.079545478 +0000 UTC m=+172.448172846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fss8" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.582310 4736 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z7cf7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]log ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]etcd ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]poststarthook/generic-apiserver-start-informers ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]poststarthook/max-in-flight-filter ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 14 10:44:21 crc kubenswrapper[4736]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 14 10:44:21 crc kubenswrapper[4736]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 14 10:44:21 crc kubenswrapper[4736]: [+]poststarthook/project.openshift.io-projectcache ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]poststarthook/openshift.io-startinformers ok Feb 14 10:44:21 crc kubenswrapper[4736]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 14 10:44:21 crc 
kubenswrapper[4736]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 14 10:44:21 crc kubenswrapper[4736]: livez check failed Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.582370 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" podUID="71913368-a56a-4e9c-b23b-e6b69f79c110" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.606540 4736 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-14T10:44:21.364367219Z","Handler":null,"Name":""} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.651975 4736 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.652010 4736 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.681021 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.697199 4736 generic.go:334] "Generic (PLEG): container finished" podID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerID="a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9" exitCode=0 Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.697294 4736 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpgts" event={"ID":"36c96a86-aadc-46d0-bca7-3d9fcca42ec3","Type":"ContainerDied","Data":"a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9"} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.697322 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpgts" event={"ID":"36c96a86-aadc-46d0-bca7-3d9fcca42ec3","Type":"ContainerStarted","Data":"676f54631570ade4ddd54824621b70a3e5a06223a547e37a2cf83a4a3718dea9"} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.701304 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.709090 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxsl4" event={"ID":"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f","Type":"ContainerStarted","Data":"618b2f5e0904935178f746e6d4294337b3c59cd4e4eb69e8472d5e34999d539a"} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.714290 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 10:44:21 crc kubenswrapper[4736]: E0214 10:44:21.736533 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc387581c_aaa7_4dbb_875a_8c506635f598.slice/crio-12caeb67c220d9e384443bc85b2baf77541249902bca0f72beb63f10b5dd6d06.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff9c6f99_33a8_48c1_8ecf_56a4e9b4ec8e.slice/crio-conmon-16975f948dfce7f3bcf8f4c72c497bda768f747ab58289dd05ded2162cbf837f.scope\": RecentStats: unable to find data in memory cache]" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.746836 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.747830 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.760954 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-46g9b"] Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.762120 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.765004 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.771031 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.779358 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.782756 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" event={"ID":"9df4c287-aa48-47d1-86b3-156b92993310","Type":"ContainerStarted","Data":"80f960b8e9d71b662132a14f5f6ae7e3eb9ede31399385e0f5a1668dd5a8cef6"} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.783241 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-utilities\") pod \"redhat-operators-46g9b\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.783318 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.783374 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-catalog-content\") pod \"redhat-operators-46g9b\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.783390 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khqcw\" (UniqueName: \"kubernetes.io/projected/d3d771cd-3ef9-44db-8981-3e8241e36f30-kube-api-access-khqcw\") pod \"redhat-operators-46g9b\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.817803 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"147a9fda-d92b-444a-a118-0085207d8f57","Type":"ContainerStarted","Data":"8e06b9a13269d376bb011ac76d9922fadf003ec3ae39f24610c21d77ae0e96fc"} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.852134 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prk5f" event={"ID":"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e","Type":"ContainerStarted","Data":"16975f948dfce7f3bcf8f4c72c497bda768f747ab58289dd05ded2162cbf837f"} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.852180 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prk5f" event={"ID":"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e","Type":"ContainerStarted","Data":"e86eb71c345b0bd74ff52d4fe3bf8c4f2840d73813c761085a7491023f842ac2"} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.880419 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqrw8" event={"ID":"b71b0996-cb92-4faa-9245-95f7e9afb7fb","Type":"ContainerStarted","Data":"ebbbeab1157075d0607cb662c9bfd1acce7b879efd9ee97de7185218cef1b4ec"} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 
10:44:21.889590 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khqcw\" (UniqueName: \"kubernetes.io/projected/d3d771cd-3ef9-44db-8981-3e8241e36f30-kube-api-access-khqcw\") pod \"redhat-operators-46g9b\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.889758 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-utilities\") pod \"redhat-operators-46g9b\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.889827 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-catalog-content\") pod \"redhat-operators-46g9b\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.890675 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-utilities\") pod \"redhat-operators-46g9b\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.890779 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-catalog-content\") pod \"redhat-operators-46g9b\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.928283 4736 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5gl7g" podStartSLOduration=12.928269912 podStartE2EDuration="12.928269912s" podCreationTimestamp="2026-02-14 10:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:21.925566658 +0000 UTC m=+172.294194026" watchObservedRunningTime="2026-02-14 10:44:21.928269912 +0000 UTC m=+172.296897280" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.932759 4736 generic.go:334] "Generic (PLEG): container finished" podID="c387581c-aaa7-4dbb-875a-8c506635f598" containerID="12caeb67c220d9e384443bc85b2baf77541249902bca0f72beb63f10b5dd6d06" exitCode=0 Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.941688 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khqcw\" (UniqueName: \"kubernetes.io/projected/d3d771cd-3ef9-44db-8981-3e8241e36f30-kube-api-access-khqcw\") pod \"redhat-operators-46g9b\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.942295 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pstv4" event={"ID":"c387581c-aaa7-4dbb-875a-8c506635f598","Type":"ContainerDied","Data":"12caeb67c220d9e384443bc85b2baf77541249902bca0f72beb63f10b5dd6d06"} Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.942350 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46g9b"] Feb 14 10:44:21 crc kubenswrapper[4736]: I0214 10:44:21.942373 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pstv4" event={"ID":"c387581c-aaa7-4dbb-875a-8c506635f598","Type":"ContainerStarted","Data":"58c7d078c23303e5274a114811744956a69158728dfb533871ebaed7dc139227"} Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.007178 4736 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.007220 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.035072 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.045545 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:22 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:22 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:22 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.045589 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.079063 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6bdf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.079106 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l6bdf" podUID="0dac6876-5757-41e4-88ac-a640e67b013e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.079169 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6bdf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.079199 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l6bdf" podUID="0dac6876-5757-41e4-88ac-a640e67b013e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.129149 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n9hq7"] Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.135423 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.151691 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.156024 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9hq7"] Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.216716 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdh5m\" (UniqueName: \"kubernetes.io/projected/2ea41fdf-923c-4ec9-b482-a53e54045056-kube-api-access-pdh5m\") pod \"redhat-operators-n9hq7\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.216848 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-utilities\") pod \"redhat-operators-n9hq7\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.216871 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-catalog-content\") pod \"redhat-operators-n9hq7\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.237780 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpp7r"] Feb 14 10:44:22 crc kubenswrapper[4736]: W0214 10:44:22.269829 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33691b33_a810_4692_a71a_0a570d29c6e8.slice/crio-236e5e6e98df17f35096e2d43dd094f33175ca1a5de75ed1053cb95d010e4a08 WatchSource:0}: Error finding container 
236e5e6e98df17f35096e2d43dd094f33175ca1a5de75ed1053cb95d010e4a08: Status 404 returned error can't find the container with id 236e5e6e98df17f35096e2d43dd094f33175ca1a5de75ed1053cb95d010e4a08 Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.275050 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.305949 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wt64j" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.317644 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-utilities\") pod \"redhat-operators-n9hq7\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.317694 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-catalog-content\") pod \"redhat-operators-n9hq7\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.317775 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdh5m\" (UniqueName: \"kubernetes.io/projected/2ea41fdf-923c-4ec9-b482-a53e54045056-kube-api-access-pdh5m\") pod \"redhat-operators-n9hq7\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.319546 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-utilities\") pod \"redhat-operators-n9hq7\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.319726 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fss8\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.322121 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-catalog-content\") pod \"redhat-operators-n9hq7\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.369651 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdh5m\" (UniqueName: \"kubernetes.io/projected/2ea41fdf-923c-4ec9-b482-a53e54045056-kube-api-access-pdh5m\") pod \"redhat-operators-n9hq7\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.411211 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.520987 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.585287 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.968905 4736 generic.go:334] "Generic (PLEG): container finished" podID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerID="16975f948dfce7f3bcf8f4c72c497bda768f747ab58289dd05ded2162cbf837f" exitCode=0 Feb 14 10:44:22 crc kubenswrapper[4736]: I0214 10:44:22.969092 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prk5f" event={"ID":"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e","Type":"ContainerDied","Data":"16975f948dfce7f3bcf8f4c72c497bda768f747ab58289dd05ded2162cbf837f"} Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.004406 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e4f3e67a-2eda-4e1e-b793-072fa4cee26e","Type":"ContainerStarted","Data":"de6d957d748344908e021aa75aef4e540367f23c9e910381bdde705b4b790860"} Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.004458 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e4f3e67a-2eda-4e1e-b793-072fa4cee26e","Type":"ContainerStarted","Data":"50f8d1bd134a3f0c819fc6b5450b0c3539534fcde8fc78461134382c8cb565dd"} Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.018176 4736 generic.go:334] "Generic (PLEG): container finished" podID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerID="6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9" exitCode=0 Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.018250 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqrw8" event={"ID":"b71b0996-cb92-4faa-9245-95f7e9afb7fb","Type":"ContainerDied","Data":"6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9"} Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.042300 4736 generic.go:334] "Generic (PLEG): container 
finished" podID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerID="e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12" exitCode=0 Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.042437 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxsl4" event={"ID":"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f","Type":"ContainerDied","Data":"e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12"} Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.046864 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:23 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:23 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:23 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.046943 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.062694 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.062678452 podStartE2EDuration="3.062678452s" podCreationTimestamp="2026-02-14 10:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:23.058186799 +0000 UTC m=+173.426814167" watchObservedRunningTime="2026-02-14 10:44:23.062678452 +0000 UTC m=+173.431305820" Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.066829 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="33691b33-a810-4692-a71a-0a570d29c6e8" containerID="0b3afc9e11723e5860d3e01a155969fac99e616299deea91c5a1de71cc9e5b97" exitCode=0 Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.067458 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpp7r" event={"ID":"33691b33-a810-4692-a71a-0a570d29c6e8","Type":"ContainerDied","Data":"0b3afc9e11723e5860d3e01a155969fac99e616299deea91c5a1de71cc9e5b97"} Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.067479 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpp7r" event={"ID":"33691b33-a810-4692-a71a-0a570d29c6e8","Type":"ContainerStarted","Data":"236e5e6e98df17f35096e2d43dd094f33175ca1a5de75ed1053cb95d010e4a08"} Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.084043 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"147a9fda-d92b-444a-a118-0085207d8f57","Type":"ContainerStarted","Data":"46c88599056af6f26b28532fd209f0f16779651c2e3fa688c265d64505b4ae48"} Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.099169 4736 generic.go:334] "Generic (PLEG): container finished" podID="1a942552-de44-4c27-8779-4cf239de59a3" containerID="edd7a7993cf58d5f124b07e456b139d7364ff190fc4771825ec2d0f566119cca" exitCode=0 Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.099499 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" event={"ID":"1a942552-de44-4c27-8779-4cf239de59a3","Type":"ContainerDied","Data":"edd7a7993cf58d5f124b07e456b139d7364ff190fc4771825ec2d0f566119cca"} Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.112015 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-666xs" Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.173175 4736 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fss8"] Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.407323 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9hq7"] Feb 14 10:44:23 crc kubenswrapper[4736]: I0214 10:44:23.491267 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46g9b"] Feb 14 10:44:23 crc kubenswrapper[4736]: W0214 10:44:23.584400 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3d771cd_3ef9_44db_8981_3e8241e36f30.slice/crio-7dd29ef4205698f712b1cbe19be0ef0e9202394e63387ca1f455fe10fd0d9a1f WatchSource:0}: Error finding container 7dd29ef4205698f712b1cbe19be0ef0e9202394e63387ca1f455fe10fd0d9a1f: Status 404 returned error can't find the container with id 7dd29ef4205698f712b1cbe19be0ef0e9202394e63387ca1f455fe10fd0d9a1f Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.033817 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:24 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:24 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:24 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.034170 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.070974 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2jk7h" Feb 14 10:44:24 crc 
kubenswrapper[4736]: I0214 10:44:24.111247 4736 generic.go:334] "Generic (PLEG): container finished" podID="e4f3e67a-2eda-4e1e-b793-072fa4cee26e" containerID="de6d957d748344908e021aa75aef4e540367f23c9e910381bdde705b4b790860" exitCode=0 Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.111441 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e4f3e67a-2eda-4e1e-b793-072fa4cee26e","Type":"ContainerDied","Data":"de6d957d748344908e021aa75aef4e540367f23c9e910381bdde705b4b790860"} Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.139188 4736 generic.go:334] "Generic (PLEG): container finished" podID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerID="61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920" exitCode=0 Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.139286 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hq7" event={"ID":"2ea41fdf-923c-4ec9-b482-a53e54045056","Type":"ContainerDied","Data":"61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920"} Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.139347 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hq7" event={"ID":"2ea41fdf-923c-4ec9-b482-a53e54045056","Type":"ContainerStarted","Data":"3f57b9a7f0bf72b3457e8476a1210d2e4b3e620d938bf4edbf3f6a4a4954293d"} Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.161231 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" event={"ID":"7b9a5589-a45e-4203-aea7-266e2dfa5088","Type":"ContainerStarted","Data":"c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381"} Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.161295 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" 
event={"ID":"7b9a5589-a45e-4203-aea7-266e2dfa5088","Type":"ContainerStarted","Data":"152cb58cfebb0d60395a65fa5b3b6b2fa3afc86cd4e3b45a1b8cd382b01a07be"} Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.162793 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.166454 4736 generic.go:334] "Generic (PLEG): container finished" podID="147a9fda-d92b-444a-a118-0085207d8f57" containerID="46c88599056af6f26b28532fd209f0f16779651c2e3fa688c265d64505b4ae48" exitCode=0 Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.166696 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"147a9fda-d92b-444a-a118-0085207d8f57","Type":"ContainerDied","Data":"46c88599056af6f26b28532fd209f0f16779651c2e3fa688c265d64505b4ae48"} Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.182008 4736 generic.go:334] "Generic (PLEG): container finished" podID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerID="e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543" exitCode=0 Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.183097 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46g9b" event={"ID":"d3d771cd-3ef9-44db-8981-3e8241e36f30","Type":"ContainerDied","Data":"e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543"} Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.183128 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46g9b" event={"ID":"d3d771cd-3ef9-44db-8981-3e8241e36f30","Type":"ContainerStarted","Data":"7dd29ef4205698f712b1cbe19be0ef0e9202394e63387ca1f455fe10fd0d9a1f"} Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.183903 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" podStartSLOduration=154.183886079 podStartE2EDuration="2m34.183886079s" podCreationTimestamp="2026-02-14 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:44:24.180020803 +0000 UTC m=+174.548648191" watchObservedRunningTime="2026-02-14 10:44:24.183886079 +0000 UTC m=+174.552513447" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.460304 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.578701 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147a9fda-d92b-444a-a118-0085207d8f57-kube-api-access\") pod \"147a9fda-d92b-444a-a118-0085207d8f57\" (UID: \"147a9fda-d92b-444a-a118-0085207d8f57\") " Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.578831 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147a9fda-d92b-444a-a118-0085207d8f57-kubelet-dir\") pod \"147a9fda-d92b-444a-a118-0085207d8f57\" (UID: \"147a9fda-d92b-444a-a118-0085207d8f57\") " Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.578914 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147a9fda-d92b-444a-a118-0085207d8f57-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "147a9fda-d92b-444a-a118-0085207d8f57" (UID: "147a9fda-d92b-444a-a118-0085207d8f57"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.579355 4736 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147a9fda-d92b-444a-a118-0085207d8f57-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.584721 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147a9fda-d92b-444a-a118-0085207d8f57-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "147a9fda-d92b-444a-a118-0085207d8f57" (UID: "147a9fda-d92b-444a-a118-0085207d8f57"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.598325 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.681519 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147a9fda-d92b-444a-a118-0085207d8f57-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.782289 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a942552-de44-4c27-8779-4cf239de59a3-secret-volume\") pod \"1a942552-de44-4c27-8779-4cf239de59a3\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.782935 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a942552-de44-4c27-8779-4cf239de59a3-config-volume\") pod \"1a942552-de44-4c27-8779-4cf239de59a3\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " Feb 14 10:44:24 crc kubenswrapper[4736]: 
I0214 10:44:24.782964 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glxct\" (UniqueName: \"kubernetes.io/projected/1a942552-de44-4c27-8779-4cf239de59a3-kube-api-access-glxct\") pod \"1a942552-de44-4c27-8779-4cf239de59a3\" (UID: \"1a942552-de44-4c27-8779-4cf239de59a3\") " Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.787763 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a942552-de44-4c27-8779-4cf239de59a3-kube-api-access-glxct" (OuterVolumeSpecName: "kube-api-access-glxct") pod "1a942552-de44-4c27-8779-4cf239de59a3" (UID: "1a942552-de44-4c27-8779-4cf239de59a3"). InnerVolumeSpecName "kube-api-access-glxct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.788774 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a942552-de44-4c27-8779-4cf239de59a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a942552-de44-4c27-8779-4cf239de59a3" (UID: "1a942552-de44-4c27-8779-4cf239de59a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.805797 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a942552-de44-4c27-8779-4cf239de59a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a942552-de44-4c27-8779-4cf239de59a3" (UID: "1a942552-de44-4c27-8779-4cf239de59a3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.884637 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a942552-de44-4c27-8779-4cf239de59a3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.884675 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a942552-de44-4c27-8779-4cf239de59a3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:24 crc kubenswrapper[4736]: I0214 10:44:24.884684 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glxct\" (UniqueName: \"kubernetes.io/projected/1a942552-de44-4c27-8779-4cf239de59a3-kube-api-access-glxct\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.029766 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:25 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:25 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:25 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.029947 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.249552 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"147a9fda-d92b-444a-a118-0085207d8f57","Type":"ContainerDied","Data":"8e06b9a13269d376bb011ac76d9922fadf003ec3ae39f24610c21d77ae0e96fc"} Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.249594 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e06b9a13269d376bb011ac76d9922fadf003ec3ae39f24610c21d77ae0e96fc" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.249674 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.265210 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" event={"ID":"1a942552-de44-4c27-8779-4cf239de59a3","Type":"ContainerDied","Data":"7e96b33d95a15fd0a464bb89838d6969b033a756550604825832b482cd06da27"} Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.265244 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e96b33d95a15fd0a464bb89838d6969b033a756550604825832b482cd06da27" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.265247 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.617877 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.703406 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kube-api-access\") pod \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\" (UID: \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\") " Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.703470 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kubelet-dir\") pod \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\" (UID: \"e4f3e67a-2eda-4e1e-b793-072fa4cee26e\") " Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.703726 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e4f3e67a-2eda-4e1e-b793-072fa4cee26e" (UID: "e4f3e67a-2eda-4e1e-b793-072fa4cee26e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.716099 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e4f3e67a-2eda-4e1e-b793-072fa4cee26e" (UID: "e4f3e67a-2eda-4e1e-b793-072fa4cee26e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.804880 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:25 crc kubenswrapper[4736]: I0214 10:44:25.804912 4736 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f3e67a-2eda-4e1e-b793-072fa4cee26e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:26 crc kubenswrapper[4736]: I0214 10:44:26.028912 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:26 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:26 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:26 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:26 crc kubenswrapper[4736]: I0214 10:44:26.028973 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:26 crc kubenswrapper[4736]: I0214 10:44:26.130145 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:26 crc kubenswrapper[4736]: I0214 10:44:26.138114 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-z7cf7" Feb 14 10:44:26 crc kubenswrapper[4736]: I0214 10:44:26.313280 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"e4f3e67a-2eda-4e1e-b793-072fa4cee26e","Type":"ContainerDied","Data":"50f8d1bd134a3f0c819fc6b5450b0c3539534fcde8fc78461134382c8cb565dd"} Feb 14 10:44:26 crc kubenswrapper[4736]: I0214 10:44:26.313329 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f8d1bd134a3f0c819fc6b5450b0c3539534fcde8fc78461134382c8cb565dd" Feb 14 10:44:26 crc kubenswrapper[4736]: I0214 10:44:26.313396 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 10:44:27 crc kubenswrapper[4736]: I0214 10:44:27.032374 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:27 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:27 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:27 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:27 crc kubenswrapper[4736]: I0214 10:44:27.032433 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:28 crc kubenswrapper[4736]: I0214 10:44:28.028788 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:28 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:28 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:28 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:28 crc kubenswrapper[4736]: I0214 10:44:28.029198 4736 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:29 crc kubenswrapper[4736]: I0214 10:44:29.028792 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:29 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:29 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:29 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:29 crc kubenswrapper[4736]: I0214 10:44:29.028870 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:30 crc kubenswrapper[4736]: I0214 10:44:30.028217 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:30 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:30 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:30 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:30 crc kubenswrapper[4736]: I0214 10:44:30.028267 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:31 crc kubenswrapper[4736]: I0214 10:44:31.029934 4736 patch_prober.go:28] interesting 
pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:31 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:31 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:31 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:31 crc kubenswrapper[4736]: I0214 10:44:31.030200 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:31 crc kubenswrapper[4736]: I0214 10:44:31.264588 4736 patch_prober.go:28] interesting pod/console-f9d7485db-r4f7j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 10:44:31 crc kubenswrapper[4736]: I0214 10:44:31.264642 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r4f7j" podUID="19ffdb45-8f94-48d2-93f8-b139825d4063" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 10:44:32 crc kubenswrapper[4736]: I0214 10:44:32.029603 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:32 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:32 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:32 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:32 crc kubenswrapper[4736]: I0214 10:44:32.029677 
4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:32 crc kubenswrapper[4736]: I0214 10:44:32.065468 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6bdf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 14 10:44:32 crc kubenswrapper[4736]: I0214 10:44:32.065524 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l6bdf" podUID="0dac6876-5757-41e4-88ac-a640e67b013e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 14 10:44:32 crc kubenswrapper[4736]: I0214 10:44:32.065816 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6bdf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 14 10:44:32 crc kubenswrapper[4736]: I0214 10:44:32.065874 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l6bdf" podUID="0dac6876-5757-41e4-88ac-a640e67b013e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 14 10:44:33 crc kubenswrapper[4736]: I0214 10:44:33.028027 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:33 crc kubenswrapper[4736]: 
[-]has-synced failed: reason withheld Feb 14 10:44:33 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:33 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:33 crc kubenswrapper[4736]: I0214 10:44:33.028069 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:34 crc kubenswrapper[4736]: I0214 10:44:34.028223 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:34 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:34 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:34 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:34 crc kubenswrapper[4736]: I0214 10:44:34.028294 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:35 crc kubenswrapper[4736]: I0214 10:44:35.029827 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:35 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:35 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:35 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:35 crc kubenswrapper[4736]: I0214 10:44:35.029889 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-459gs" 
podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:35 crc kubenswrapper[4736]: I0214 10:44:35.361104 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-brfbh"] Feb 14 10:44:35 crc kubenswrapper[4736]: I0214 10:44:35.361298 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" podUID="4985141b-c570-4dd3-aad8-adbf891e00e0" containerName="controller-manager" containerID="cri-o://50e4cf5d778699f79d0ecd6d164217dec068e5cf3dec8bb20eb8fda00c4c8044" gracePeriod=30 Feb 14 10:44:35 crc kubenswrapper[4736]: I0214 10:44:35.374459 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr"] Feb 14 10:44:35 crc kubenswrapper[4736]: I0214 10:44:35.374655 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" podUID="21d3351d-662a-4e3e-b7fa-f7eb332a1506" containerName="route-controller-manager" containerID="cri-o://958e4b9be34c9ff9976ed0b923f5791718b68a51be4727d23cc087b38021978a" gracePeriod=30 Feb 14 10:44:36 crc kubenswrapper[4736]: I0214 10:44:36.043895 4736 patch_prober.go:28] interesting pod/router-default-5444994796-459gs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 10:44:36 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Feb 14 10:44:36 crc kubenswrapper[4736]: [+]process-running ok Feb 14 10:44:36 crc kubenswrapper[4736]: healthz check failed Feb 14 10:44:36 crc kubenswrapper[4736]: I0214 10:44:36.045374 4736 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-459gs" podUID="ad6d5e20-f083-4fc5-8856-234465465c02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 10:44:36 crc kubenswrapper[4736]: I0214 10:44:36.391872 4736 generic.go:334] "Generic (PLEG): container finished" podID="4985141b-c570-4dd3-aad8-adbf891e00e0" containerID="50e4cf5d778699f79d0ecd6d164217dec068e5cf3dec8bb20eb8fda00c4c8044" exitCode=0 Feb 14 10:44:36 crc kubenswrapper[4736]: I0214 10:44:36.391933 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" event={"ID":"4985141b-c570-4dd3-aad8-adbf891e00e0","Type":"ContainerDied","Data":"50e4cf5d778699f79d0ecd6d164217dec068e5cf3dec8bb20eb8fda00c4c8044"} Feb 14 10:44:36 crc kubenswrapper[4736]: I0214 10:44:36.393826 4736 generic.go:334] "Generic (PLEG): container finished" podID="21d3351d-662a-4e3e-b7fa-f7eb332a1506" containerID="958e4b9be34c9ff9976ed0b923f5791718b68a51be4727d23cc087b38021978a" exitCode=0 Feb 14 10:44:36 crc kubenswrapper[4736]: I0214 10:44:36.393857 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" event={"ID":"21d3351d-662a-4e3e-b7fa-f7eb332a1506","Type":"ContainerDied","Data":"958e4b9be34c9ff9976ed0b923f5791718b68a51be4727d23cc087b38021978a"} Feb 14 10:44:37 crc kubenswrapper[4736]: I0214 10:44:37.028024 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:37 crc kubenswrapper[4736]: I0214 10:44:37.030645 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-459gs" Feb 14 10:44:41 crc kubenswrapper[4736]: I0214 10:44:41.278444 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:41 crc 
kubenswrapper[4736]: I0214 10:44:41.282479 4736 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-rd5vr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 14 10:44:41 crc kubenswrapper[4736]: I0214 10:44:41.282561 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" podUID="21d3351d-662a-4e3e-b7fa-f7eb332a1506" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 14 10:44:41 crc kubenswrapper[4736]: I0214 10:44:41.284848 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:44:42 crc kubenswrapper[4736]: I0214 10:44:42.071479 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l6bdf" Feb 14 10:44:42 crc kubenswrapper[4736]: I0214 10:44:42.590977 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:44:43 crc kubenswrapper[4736]: I0214 10:44:43.072475 4736 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-brfbh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 10:44:43 crc kubenswrapper[4736]: I0214 10:44:43.072850 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" podUID="4985141b-c570-4dd3-aad8-adbf891e00e0" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.529496 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.534042 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.573152 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk"] Feb 14 10:44:45 crc kubenswrapper[4736]: E0214 10:44:45.573723 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147a9fda-d92b-444a-a118-0085207d8f57" containerName="pruner" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.573932 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="147a9fda-d92b-444a-a118-0085207d8f57" containerName="pruner" Feb 14 10:44:45 crc kubenswrapper[4736]: E0214 10:44:45.574060 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4985141b-c570-4dd3-aad8-adbf891e00e0" containerName="controller-manager" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.574147 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4985141b-c570-4dd3-aad8-adbf891e00e0" containerName="controller-manager" Feb 14 10:44:45 crc kubenswrapper[4736]: E0214 10:44:45.574229 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d3351d-662a-4e3e-b7fa-f7eb332a1506" containerName="route-controller-manager" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.574313 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d3351d-662a-4e3e-b7fa-f7eb332a1506" 
containerName="route-controller-manager" Feb 14 10:44:45 crc kubenswrapper[4736]: E0214 10:44:45.574456 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f3e67a-2eda-4e1e-b793-072fa4cee26e" containerName="pruner" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.574574 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f3e67a-2eda-4e1e-b793-072fa4cee26e" containerName="pruner" Feb 14 10:44:45 crc kubenswrapper[4736]: E0214 10:44:45.574702 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a942552-de44-4c27-8779-4cf239de59a3" containerName="collect-profiles" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.574827 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a942552-de44-4c27-8779-4cf239de59a3" containerName="collect-profiles" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.575210 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d3351d-662a-4e3e-b7fa-f7eb332a1506" containerName="route-controller-manager" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.575329 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4f3e67a-2eda-4e1e-b793-072fa4cee26e" containerName="pruner" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.575496 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a942552-de44-4c27-8779-4cf239de59a3" containerName="collect-profiles" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.575895 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4985141b-c570-4dd3-aad8-adbf891e00e0" containerName="controller-manager" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.576004 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="147a9fda-d92b-444a-a118-0085207d8f57" containerName="pruner" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.576506 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.579407 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk"] Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.605056 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb947\" (UniqueName: \"kubernetes.io/projected/4985141b-c570-4dd3-aad8-adbf891e00e0-kube-api-access-tb947\") pod \"4985141b-c570-4dd3-aad8-adbf891e00e0\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.605335 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-client-ca\") pod \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.605475 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d3351d-662a-4e3e-b7fa-f7eb332a1506-serving-cert\") pod \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.605611 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4985141b-c570-4dd3-aad8-adbf891e00e0-serving-cert\") pod \"4985141b-c570-4dd3-aad8-adbf891e00e0\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.606157 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdnm2\" (UniqueName: \"kubernetes.io/projected/21d3351d-662a-4e3e-b7fa-f7eb332a1506-kube-api-access-rdnm2\") pod 
\"21d3351d-662a-4e3e-b7fa-f7eb332a1506\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.606287 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-proxy-ca-bundles\") pod \"4985141b-c570-4dd3-aad8-adbf891e00e0\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.606426 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-config\") pod \"4985141b-c570-4dd3-aad8-adbf891e00e0\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.606603 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-config\") pod \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\" (UID: \"21d3351d-662a-4e3e-b7fa-f7eb332a1506\") " Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.606945 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-client-ca\") pod \"4985141b-c570-4dd3-aad8-adbf891e00e0\" (UID: \"4985141b-c570-4dd3-aad8-adbf891e00e0\") " Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.607158 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-client-ca\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.607273 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-serving-cert\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.607397 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-config\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.607596 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-proxy-ca-bundles\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.607754 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmw46\" (UniqueName: \"kubernetes.io/projected/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-kube-api-access-rmw46\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.608282 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4985141b-c570-4dd3-aad8-adbf891e00e0" (UID: 
"4985141b-c570-4dd3-aad8-adbf891e00e0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.608399 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "4985141b-c570-4dd3-aad8-adbf891e00e0" (UID: "4985141b-c570-4dd3-aad8-adbf891e00e0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.608986 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-config" (OuterVolumeSpecName: "config") pod "4985141b-c570-4dd3-aad8-adbf891e00e0" (UID: "4985141b-c570-4dd3-aad8-adbf891e00e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.609088 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-config" (OuterVolumeSpecName: "config") pod "21d3351d-662a-4e3e-b7fa-f7eb332a1506" (UID: "21d3351d-662a-4e3e-b7fa-f7eb332a1506"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.609569 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-client-ca" (OuterVolumeSpecName: "client-ca") pod "21d3351d-662a-4e3e-b7fa-f7eb332a1506" (UID: "21d3351d-662a-4e3e-b7fa-f7eb332a1506"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.614769 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4985141b-c570-4dd3-aad8-adbf891e00e0-kube-api-access-tb947" (OuterVolumeSpecName: "kube-api-access-tb947") pod "4985141b-c570-4dd3-aad8-adbf891e00e0" (UID: "4985141b-c570-4dd3-aad8-adbf891e00e0"). InnerVolumeSpecName "kube-api-access-tb947". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.615423 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d3351d-662a-4e3e-b7fa-f7eb332a1506-kube-api-access-rdnm2" (OuterVolumeSpecName: "kube-api-access-rdnm2") pod "21d3351d-662a-4e3e-b7fa-f7eb332a1506" (UID: "21d3351d-662a-4e3e-b7fa-f7eb332a1506"). InnerVolumeSpecName "kube-api-access-rdnm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.620904 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d3351d-662a-4e3e-b7fa-f7eb332a1506-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21d3351d-662a-4e3e-b7fa-f7eb332a1506" (UID: "21d3351d-662a-4e3e-b7fa-f7eb332a1506"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.621888 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4985141b-c570-4dd3-aad8-adbf891e00e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4985141b-c570-4dd3-aad8-adbf891e00e0" (UID: "4985141b-c570-4dd3-aad8-adbf891e00e0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709282 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-client-ca\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709339 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-serving-cert\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709364 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-config\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709424 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-proxy-ca-bundles\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709456 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmw46\" (UniqueName: \"kubernetes.io/projected/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-kube-api-access-rmw46\") pod 
\"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709502 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709512 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709520 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709532 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4985141b-c570-4dd3-aad8-adbf891e00e0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709542 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb947\" (UniqueName: \"kubernetes.io/projected/4985141b-c570-4dd3-aad8-adbf891e00e0-kube-api-access-tb947\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709553 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d3351d-662a-4e3e-b7fa-f7eb332a1506-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709563 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d3351d-662a-4e3e-b7fa-f7eb332a1506-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 
14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709572 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4985141b-c570-4dd3-aad8-adbf891e00e0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.709582 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdnm2\" (UniqueName: \"kubernetes.io/projected/21d3351d-662a-4e3e-b7fa-f7eb332a1506-kube-api-access-rdnm2\") on node \"crc\" DevicePath \"\"" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.710513 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-client-ca\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.710890 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-proxy-ca-bundles\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.711039 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-config\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.721832 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-serving-cert\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.726334 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmw46\" (UniqueName: \"kubernetes.io/projected/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-kube-api-access-rmw46\") pod \"controller-manager-5d5ff8f8c5-px5qk\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:45 crc kubenswrapper[4736]: I0214 10:44:45.904606 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:44:46 crc kubenswrapper[4736]: I0214 10:44:46.478144 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" event={"ID":"4985141b-c570-4dd3-aad8-adbf891e00e0","Type":"ContainerDied","Data":"4c2e1b15368c2c1cf06a5dc4471554704372d1aa44c3f36cb7821985bc3e52c8"} Feb 14 10:44:46 crc kubenswrapper[4736]: I0214 10:44:46.478229 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-brfbh" Feb 14 10:44:46 crc kubenswrapper[4736]: I0214 10:44:46.478517 4736 scope.go:117] "RemoveContainer" containerID="50e4cf5d778699f79d0ecd6d164217dec068e5cf3dec8bb20eb8fda00c4c8044" Feb 14 10:44:46 crc kubenswrapper[4736]: I0214 10:44:46.482903 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" event={"ID":"21d3351d-662a-4e3e-b7fa-f7eb332a1506","Type":"ContainerDied","Data":"8f10675700b070c45265bd55babd87e783c9feae06d73173f691b96abb82ba31"} Feb 14 10:44:46 crc kubenswrapper[4736]: I0214 10:44:46.482987 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr" Feb 14 10:44:46 crc kubenswrapper[4736]: I0214 10:44:46.518253 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-brfbh"] Feb 14 10:44:46 crc kubenswrapper[4736]: I0214 10:44:46.525601 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-brfbh"] Feb 14 10:44:46 crc kubenswrapper[4736]: I0214 10:44:46.530620 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr"] Feb 14 10:44:46 crc kubenswrapper[4736]: I0214 10:44:46.533835 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd5vr"] Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.695183 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 
10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.695243 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.772992 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn"] Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.773625 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.777121 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.777473 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.777698 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.778166 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.778264 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.782527 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn"] Feb 14 10:44:47 crc kubenswrapper[4736]: 
I0214 10:44:47.786128 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.836268 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-client-ca\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.836426 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d35336-be47-4e58-9123-5f9f70a64363-serving-cert\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.836474 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msb57\" (UniqueName: \"kubernetes.io/projected/76d35336-be47-4e58-9123-5f9f70a64363-kube-api-access-msb57\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.836499 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-config\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc 
kubenswrapper[4736]: I0214 10:44:47.937203 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msb57\" (UniqueName: \"kubernetes.io/projected/76d35336-be47-4e58-9123-5f9f70a64363-kube-api-access-msb57\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.937251 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-config\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.937348 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-client-ca\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.937390 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d35336-be47-4e58-9123-5f9f70a64363-serving-cert\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.939045 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-config\") pod 
\"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.940163 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-client-ca\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.950265 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d35336-be47-4e58-9123-5f9f70a64363-serving-cert\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:47 crc kubenswrapper[4736]: I0214 10:44:47.956489 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msb57\" (UniqueName: \"kubernetes.io/projected/76d35336-be47-4e58-9123-5f9f70a64363-kube-api-access-msb57\") pod \"route-controller-manager-6fb85f7c48-76prn\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:48 crc kubenswrapper[4736]: I0214 10:44:48.104015 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:44:48 crc kubenswrapper[4736]: I0214 10:44:48.414238 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d3351d-662a-4e3e-b7fa-f7eb332a1506" path="/var/lib/kubelet/pods/21d3351d-662a-4e3e-b7fa-f7eb332a1506/volumes" Feb 14 10:44:48 crc kubenswrapper[4736]: I0214 10:44:48.415401 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4985141b-c570-4dd3-aad8-adbf891e00e0" path="/var/lib/kubelet/pods/4985141b-c570-4dd3-aad8-adbf891e00e0/volumes" Feb 14 10:44:52 crc kubenswrapper[4736]: I0214 10:44:52.248485 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f6vrk" Feb 14 10:44:55 crc kubenswrapper[4736]: I0214 10:44:55.312028 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk"] Feb 14 10:44:55 crc kubenswrapper[4736]: I0214 10:44:55.401439 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn"] Feb 14 10:44:56 crc kubenswrapper[4736]: E0214 10:44:56.450029 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 14 10:44:56 crc kubenswrapper[4736]: E0214 10:44:56.450216 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbrg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hpgts_openshift-marketplace(36c96a86-aadc-46d0-bca7-3d9fcca42ec3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 10:44:56 crc kubenswrapper[4736]: E0214 10:44:56.451580 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hpgts" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" Feb 14 10:44:57 crc 
kubenswrapper[4736]: E0214 10:44:57.366453 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hpgts" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" Feb 14 10:44:57 crc kubenswrapper[4736]: E0214 10:44:57.423837 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 14 10:44:57 crc kubenswrapper[4736]: E0214 10:44:57.424203 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2wwwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kqrw8_openshift-marketplace(b71b0996-cb92-4faa-9245-95f7e9afb7fb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 10:44:57 crc kubenswrapper[4736]: E0214 10:44:57.426673 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kqrw8" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" Feb 14 10:44:57 crc 
kubenswrapper[4736]: E0214 10:44:57.438851 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 10:44:57 crc kubenswrapper[4736]: E0214 10:44:57.439001 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb97l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-kxsl4_openshift-marketplace(3f2feb07-1c8a-4c17-81a1-24f60ac3f31f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 10:44:57 crc kubenswrapper[4736]: E0214 10:44:57.440116 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kxsl4" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.136438 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657"] Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.137471 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.139999 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.140412 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.142822 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657"] Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.213438 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9zft\" (UniqueName: \"kubernetes.io/projected/4cadeff5-99fa-4350-ab0b-fafde7e713a1-kube-api-access-d9zft\") pod \"collect-profiles-29517765-pc657\" (UID: 
\"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.213519 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cadeff5-99fa-4350-ab0b-fafde7e713a1-config-volume\") pod \"collect-profiles-29517765-pc657\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.213586 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cadeff5-99fa-4350-ab0b-fafde7e713a1-secret-volume\") pod \"collect-profiles-29517765-pc657\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.227717 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.228608 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.233438 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.234225 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.237727 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.314630 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cadeff5-99fa-4350-ab0b-fafde7e713a1-config-volume\") pod \"collect-profiles-29517765-pc657\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.314685 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cadeff5-99fa-4350-ab0b-fafde7e713a1-secret-volume\") pod \"collect-profiles-29517765-pc657\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.314728 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0426dcf0-a494-41fc-9029-0779308eabe6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0426dcf0-a494-41fc-9029-0779308eabe6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.314778 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0426dcf0-a494-41fc-9029-0779308eabe6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0426dcf0-a494-41fc-9029-0779308eabe6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.314819 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9zft\" (UniqueName: \"kubernetes.io/projected/4cadeff5-99fa-4350-ab0b-fafde7e713a1-kube-api-access-d9zft\") pod \"collect-profiles-29517765-pc657\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.316220 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cadeff5-99fa-4350-ab0b-fafde7e713a1-config-volume\") pod \"collect-profiles-29517765-pc657\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.321085 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cadeff5-99fa-4350-ab0b-fafde7e713a1-secret-volume\") pod \"collect-profiles-29517765-pc657\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.329931 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9zft\" (UniqueName: \"kubernetes.io/projected/4cadeff5-99fa-4350-ab0b-fafde7e713a1-kube-api-access-d9zft\") pod \"collect-profiles-29517765-pc657\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.415670 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0426dcf0-a494-41fc-9029-0779308eabe6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0426dcf0-a494-41fc-9029-0779308eabe6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.415730 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0426dcf0-a494-41fc-9029-0779308eabe6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0426dcf0-a494-41fc-9029-0779308eabe6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.415892 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0426dcf0-a494-41fc-9029-0779308eabe6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0426dcf0-a494-41fc-9029-0779308eabe6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.438648 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0426dcf0-a494-41fc-9029-0779308eabe6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0426dcf0-a494-41fc-9029-0779308eabe6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.458251 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:00 crc kubenswrapper[4736]: I0214 10:45:00.553631 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:01 crc kubenswrapper[4736]: E0214 10:45:01.524702 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kxsl4" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" Feb 14 10:45:01 crc kubenswrapper[4736]: E0214 10:45:01.524763 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kqrw8" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" Feb 14 10:45:01 crc kubenswrapper[4736]: I0214 10:45:01.537424 4736 scope.go:117] "RemoveContainer" containerID="958e4b9be34c9ff9976ed0b923f5791718b68a51be4727d23cc087b38021978a" Feb 14 10:45:01 crc kubenswrapper[4736]: E0214 10:45:01.642561 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 10:45:01 crc kubenswrapper[4736]: E0214 10:45:01.645881 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbjdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pstv4_openshift-marketplace(c387581c-aaa7-4dbb-875a-8c506635f598): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 10:45:01 crc kubenswrapper[4736]: E0214 10:45:01.647676 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pstv4" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" Feb 14 10:45:01 crc 
kubenswrapper[4736]: I0214 10:45:01.864884 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn"] Feb 14 10:45:01 crc kubenswrapper[4736]: W0214 10:45:01.883056 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d35336_be47_4e58_9123_5f9f70a64363.slice/crio-9d4ab8367664fffe9a67e5dda1152fc795e5c375d216112351a6ffcab9f48a19 WatchSource:0}: Error finding container 9d4ab8367664fffe9a67e5dda1152fc795e5c375d216112351a6ffcab9f48a19: Status 404 returned error can't find the container with id 9d4ab8367664fffe9a67e5dda1152fc795e5c375d216112351a6ffcab9f48a19 Feb 14 10:45:01 crc kubenswrapper[4736]: E0214 10:45:01.886893 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 10:45:01 crc kubenswrapper[4736]: E0214 10:45:01.887029 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdh5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-n9hq7_openshift-marketplace(2ea41fdf-923c-4ec9-b482-a53e54045056): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 10:45:01 crc kubenswrapper[4736]: E0214 10:45:01.888800 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-n9hq7" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" Feb 14 10:45:02 crc 
kubenswrapper[4736]: I0214 10:45:02.114055 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 10:45:02 crc kubenswrapper[4736]: W0214 10:45:02.121948 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0426dcf0_a494_41fc_9029_0779308eabe6.slice/crio-0978884f35f2631abbf9c88629f444eb1e4a995ca887592c1187dc04c2b23ce9 WatchSource:0}: Error finding container 0978884f35f2631abbf9c88629f444eb1e4a995ca887592c1187dc04c2b23ce9: Status 404 returned error can't find the container with id 0978884f35f2631abbf9c88629f444eb1e4a995ca887592c1187dc04c2b23ce9 Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.162524 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk"] Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.168134 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657"] Feb 14 10:45:02 crc kubenswrapper[4736]: W0214 10:45:02.175141 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d66609c_1c17_4cc0_9b1e_15b5ed50cc7c.slice/crio-6e8d865ad80c15da4b1648660b5180ccfd2ab594177fef943d71fcfb9d954646 WatchSource:0}: Error finding container 6e8d865ad80c15da4b1648660b5180ccfd2ab594177fef943d71fcfb9d954646: Status 404 returned error can't find the container with id 6e8d865ad80c15da4b1648660b5180ccfd2ab594177fef943d71fcfb9d954646 Feb 14 10:45:02 crc kubenswrapper[4736]: W0214 10:45:02.185645 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cadeff5_99fa_4350_ab0b_fafde7e713a1.slice/crio-683f04f005c092a7882773759aca1349bda9f4f182f1a5ed52faef57fa3dd789 WatchSource:0}: Error finding container 683f04f005c092a7882773759aca1349bda9f4f182f1a5ed52faef57fa3dd789: 
Status 404 returned error can't find the container with id 683f04f005c092a7882773759aca1349bda9f4f182f1a5ed52faef57fa3dd789 Feb 14 10:45:02 crc kubenswrapper[4736]: E0214 10:45:02.539433 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 10:45:02 crc kubenswrapper[4736]: E0214 10:45:02.539572 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khqcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDev
ice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-46g9b_openshift-marketplace(d3d771cd-3ef9-44db-8981-3e8241e36f30): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 10:45:02 crc kubenswrapper[4736]: E0214 10:45:02.540867 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-46g9b" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.584178 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" event={"ID":"4cadeff5-99fa-4350-ab0b-fafde7e713a1","Type":"ContainerStarted","Data":"95a8a0afd3edfd803943af2c83cbf15349fb0631a93de2b614310bfcadd17ecd"} Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.584540 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" event={"ID":"4cadeff5-99fa-4350-ab0b-fafde7e713a1","Type":"ContainerStarted","Data":"683f04f005c092a7882773759aca1349bda9f4f182f1a5ed52faef57fa3dd789"} Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.589155 4736 generic.go:334] "Generic (PLEG): container finished" podID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerID="a2fd5103992dfb2bfb8a40fc6cdd1f95554d2532e1ca551ae87432aed11e4cc1" exitCode=0 Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.589940 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prk5f" 
event={"ID":"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e","Type":"ContainerDied","Data":"a2fd5103992dfb2bfb8a40fc6cdd1f95554d2532e1ca551ae87432aed11e4cc1"} Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.594123 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" event={"ID":"76d35336-be47-4e58-9123-5f9f70a64363","Type":"ContainerStarted","Data":"1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704"} Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.594151 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" event={"ID":"76d35336-be47-4e58-9123-5f9f70a64363","Type":"ContainerStarted","Data":"9d4ab8367664fffe9a67e5dda1152fc795e5c375d216112351a6ffcab9f48a19"} Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.594241 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" podUID="76d35336-be47-4e58-9123-5f9f70a64363" containerName="route-controller-manager" containerID="cri-o://1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704" gracePeriod=30 Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.594913 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.602296 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" event={"ID":"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c","Type":"ContainerStarted","Data":"76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b"} Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.602344 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" event={"ID":"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c","Type":"ContainerStarted","Data":"6e8d865ad80c15da4b1648660b5180ccfd2ab594177fef943d71fcfb9d954646"} Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.603917 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0426dcf0-a494-41fc-9029-0779308eabe6","Type":"ContainerStarted","Data":"0978884f35f2631abbf9c88629f444eb1e4a995ca887592c1187dc04c2b23ce9"} Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.605313 4736 generic.go:334] "Generic (PLEG): container finished" podID="33691b33-a810-4692-a71a-0a570d29c6e8" containerID="40b23516961095a7c6ca32e21cbd18bb6e57f2041b9e5d94f8a6727aa224dbce" exitCode=0 Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.605423 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpp7r" event={"ID":"33691b33-a810-4692-a71a-0a570d29c6e8","Type":"ContainerDied","Data":"40b23516961095a7c6ca32e21cbd18bb6e57f2041b9e5d94f8a6727aa224dbce"} Feb 14 10:45:02 crc kubenswrapper[4736]: E0214 10:45:02.607784 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-46g9b" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" Feb 14 10:45:02 crc kubenswrapper[4736]: E0214 10:45:02.607948 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-n9hq7" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" Feb 14 10:45:02 crc kubenswrapper[4736]: E0214 10:45:02.608940 4736 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pstv4" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.728119 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" podStartSLOduration=27.728098264 podStartE2EDuration="27.728098264s" podCreationTimestamp="2026-02-14 10:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:45:02.725367585 +0000 UTC m=+213.093994963" watchObservedRunningTime="2026-02-14 10:45:02.728098264 +0000 UTC m=+213.096725642" Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.732428 4736 patch_prober.go:28] interesting pod/route-controller-manager-6fb85f7c48-76prn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:34542->10.217.0.55:8443: read: connection reset by peer" start-of-body= Feb 14 10:45:02 crc kubenswrapper[4736]: I0214 10:45:02.732474 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" podUID="76d35336-be47-4e58-9123-5f9f70a64363" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:34542->10.217.0.55:8443: read: connection reset by peer" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.090957 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6fb85f7c48-76prn_76d35336-be47-4e58-9123-5f9f70a64363/route-controller-manager/0.log" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.091069 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.125451 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v"] Feb 14 10:45:03 crc kubenswrapper[4736]: E0214 10:45:03.135665 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d35336-be47-4e58-9123-5f9f70a64363" containerName="route-controller-manager" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.135943 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d35336-be47-4e58-9123-5f9f70a64363" containerName="route-controller-manager" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.136448 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d35336-be47-4e58-9123-5f9f70a64363" containerName="route-controller-manager" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.137016 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v"] Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.137177 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.266366 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-client-ca\") pod \"76d35336-be47-4e58-9123-5f9f70a64363\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.266480 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d35336-be47-4e58-9123-5f9f70a64363-serving-cert\") pod \"76d35336-be47-4e58-9123-5f9f70a64363\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.266502 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-config\") pod \"76d35336-be47-4e58-9123-5f9f70a64363\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.266570 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msb57\" (UniqueName: \"kubernetes.io/projected/76d35336-be47-4e58-9123-5f9f70a64363-kube-api-access-msb57\") pod \"76d35336-be47-4e58-9123-5f9f70a64363\" (UID: \"76d35336-be47-4e58-9123-5f9f70a64363\") " Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.266859 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-serving-cert\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: 
I0214 10:45:03.266909 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplwq\" (UniqueName: \"kubernetes.io/projected/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-kube-api-access-gplwq\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.266945 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-config\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.267014 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-client-ca\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.267236 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-client-ca" (OuterVolumeSpecName: "client-ca") pod "76d35336-be47-4e58-9123-5f9f70a64363" (UID: "76d35336-be47-4e58-9123-5f9f70a64363"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.267826 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-config" (OuterVolumeSpecName: "config") pod "76d35336-be47-4e58-9123-5f9f70a64363" (UID: "76d35336-be47-4e58-9123-5f9f70a64363"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.273639 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d35336-be47-4e58-9123-5f9f70a64363-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "76d35336-be47-4e58-9123-5f9f70a64363" (UID: "76d35336-be47-4e58-9123-5f9f70a64363"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.274306 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d35336-be47-4e58-9123-5f9f70a64363-kube-api-access-msb57" (OuterVolumeSpecName: "kube-api-access-msb57") pod "76d35336-be47-4e58-9123-5f9f70a64363" (UID: "76d35336-be47-4e58-9123-5f9f70a64363"). InnerVolumeSpecName "kube-api-access-msb57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.369114 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-config\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.369263 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-client-ca\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.369313 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-serving-cert\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.369346 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gplwq\" (UniqueName: \"kubernetes.io/projected/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-kube-api-access-gplwq\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.369394 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.369407 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d35336-be47-4e58-9123-5f9f70a64363-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.369417 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d35336-be47-4e58-9123-5f9f70a64363-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.369428 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msb57\" (UniqueName: \"kubernetes.io/projected/76d35336-be47-4e58-9123-5f9f70a64363-kube-api-access-msb57\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.371073 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-client-ca\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.372608 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-config\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.378579 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-serving-cert\") pod 
\"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.390702 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gplwq\" (UniqueName: \"kubernetes.io/projected/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-kube-api-access-gplwq\") pod \"route-controller-manager-bcffbfbdf-qh79v\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") " pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.452037 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.612634 4736 generic.go:334] "Generic (PLEG): container finished" podID="4cadeff5-99fa-4350-ab0b-fafde7e713a1" containerID="95a8a0afd3edfd803943af2c83cbf15349fb0631a93de2b614310bfcadd17ecd" exitCode=0 Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.612723 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" event={"ID":"4cadeff5-99fa-4350-ab0b-fafde7e713a1","Type":"ContainerDied","Data":"95a8a0afd3edfd803943af2c83cbf15349fb0631a93de2b614310bfcadd17ecd"} Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.615097 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6fb85f7c48-76prn_76d35336-be47-4e58-9123-5f9f70a64363/route-controller-manager/0.log" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.615157 4736 generic.go:334] "Generic (PLEG): container finished" podID="76d35336-be47-4e58-9123-5f9f70a64363" containerID="1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704" exitCode=255 Feb 14 10:45:03 crc 
kubenswrapper[4736]: I0214 10:45:03.615202 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.615222 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" event={"ID":"76d35336-be47-4e58-9123-5f9f70a64363","Type":"ContainerDied","Data":"1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704"} Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.615254 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn" event={"ID":"76d35336-be47-4e58-9123-5f9f70a64363","Type":"ContainerDied","Data":"9d4ab8367664fffe9a67e5dda1152fc795e5c375d216112351a6ffcab9f48a19"} Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.615289 4736 scope.go:117] "RemoveContainer" containerID="1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.617330 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0426dcf0-a494-41fc-9029-0779308eabe6","Type":"ContainerStarted","Data":"b16023f1c6f682640cee86bffb4f8c09af61b1e9d871e5a2793e7d88f01d2f73"} Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.617394 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" podUID="3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" containerName="controller-manager" containerID="cri-o://76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b" gracePeriod=30 Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.617762 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.630316 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.645931 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v"] Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.649836 4736 scope.go:117] "RemoveContainer" containerID="1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704" Feb 14 10:45:03 crc kubenswrapper[4736]: E0214 10:45:03.650527 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704\": container with ID starting with 1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704 not found: ID does not exist" containerID="1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.650565 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704"} err="failed to get container status \"1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704\": rpc error: code = NotFound desc = could not find container \"1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704\": container with ID starting with 1813d747880dbcd034081e1eecbc8cabbc2751b050080e71c3f1023d8e381704 not found: ID does not exist" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.672407 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" podStartSLOduration=28.672384764 podStartE2EDuration="28.672384764s" 
podCreationTimestamp="2026-02-14 10:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:45:03.652150786 +0000 UTC m=+214.020778154" watchObservedRunningTime="2026-02-14 10:45:03.672384764 +0000 UTC m=+214.041012142" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.681530 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.681509919 podStartE2EDuration="3.681509919s" podCreationTimestamp="2026-02-14 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:45:03.672305162 +0000 UTC m=+214.040932530" watchObservedRunningTime="2026-02-14 10:45:03.681509919 +0000 UTC m=+214.050137287" Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.692318 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn"] Feb 14 10:45:03 crc kubenswrapper[4736]: I0214 10:45:03.699236 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb85f7c48-76prn"] Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.408162 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d35336-be47-4e58-9123-5f9f70a64363" path="/var/lib/kubelet/pods/76d35336-be47-4e58-9123-5f9f70a64363/volumes" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.551310 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.584520 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-serving-cert\") pod \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.586227 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-proxy-ca-bundles\") pod \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.586278 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-config\") pod \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.586332 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-client-ca\") pod \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.586584 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmw46\" (UniqueName: \"kubernetes.io/projected/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-kube-api-access-rmw46\") pod \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\" (UID: \"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c\") " Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.588678 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-config" (OuterVolumeSpecName: "config") pod "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" (UID: "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.589019 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" (UID: "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.589205 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" (UID: "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.599519 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" (UID: "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.627274 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-kube-api-access-rmw46" (OuterVolumeSpecName: "kube-api-access-rmw46") pod "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" (UID: "3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c"). InnerVolumeSpecName "kube-api-access-rmw46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.641014 4736 generic.go:334] "Generic (PLEG): container finished" podID="3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" containerID="76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b" exitCode=0 Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.641138 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" event={"ID":"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c","Type":"ContainerDied","Data":"76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b"} Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.641168 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" event={"ID":"3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c","Type":"ContainerDied","Data":"6e8d865ad80c15da4b1648660b5180ccfd2ab594177fef943d71fcfb9d954646"} Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.641189 4736 scope.go:117] "RemoveContainer" containerID="76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.641208 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.648150 4736 generic.go:334] "Generic (PLEG): container finished" podID="0426dcf0-a494-41fc-9029-0779308eabe6" containerID="b16023f1c6f682640cee86bffb4f8c09af61b1e9d871e5a2793e7d88f01d2f73" exitCode=0 Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.648260 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0426dcf0-a494-41fc-9029-0779308eabe6","Type":"ContainerDied","Data":"b16023f1c6f682640cee86bffb4f8c09af61b1e9d871e5a2793e7d88f01d2f73"} Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.653083 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" event={"ID":"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3","Type":"ContainerStarted","Data":"1c81c27198ec42ea97dfb949c08bca2b5daf899e6ca0d8a0b33e53bbd9b2f157"} Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.653130 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" event={"ID":"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3","Type":"ContainerStarted","Data":"319ec13c983da8a38bd457d635a775cbd0278f6ea362178aba7064725de01867"} Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.654665 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.679946 4736 scope.go:117] "RemoveContainer" containerID="76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b" Feb 14 10:45:04 crc kubenswrapper[4736]: E0214 10:45:04.683357 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b\": container with ID starting with 76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b not found: ID does not exist" containerID="76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.683405 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b"} err="failed to get container status \"76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b\": rpc error: code = NotFound desc = could not find container \"76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b\": container with ID starting with 76b63be1ed2f9930cd3c8ed98b303c0235bf8652b186f322cd416d7ffcfbd55b not found: ID does not exist" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.688912 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.689006 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmw46\" (UniqueName: \"kubernetes.io/projected/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-kube-api-access-rmw46\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.689476 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.689513 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 
10:45:04.689527 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.699622 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk"] Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.722242 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d5ff8f8c5-px5qk"] Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.723722 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" podStartSLOduration=9.723703852 podStartE2EDuration="9.723703852s" podCreationTimestamp="2026-02-14 10:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:45:04.723441364 +0000 UTC m=+215.092068752" watchObservedRunningTime="2026-02-14 10:45:04.723703852 +0000 UTC m=+215.092331220" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.747378 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.818548 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 10:45:04 crc kubenswrapper[4736]: E0214 10:45:04.820852 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" containerName="controller-manager" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.820870 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" containerName="controller-manager" Feb 14 10:45:04 crc 
kubenswrapper[4736]: I0214 10:45:04.820965 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" containerName="controller-manager" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.821299 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.832730 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.893485 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-var-lock\") pod \"installer-9-crc\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.893675 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.893764 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e745f80a-00b6-4114-8b93-60a2471d6622-kube-api-access\") pod \"installer-9-crc\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.989537 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.994569 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.994652 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.994695 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e745f80a-00b6-4114-8b93-60a2471d6622-kube-api-access\") pod \"installer-9-crc\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.994876 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-var-lock\") pod \"installer-9-crc\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:04 crc kubenswrapper[4736]: I0214 10:45:04.995001 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-var-lock\") pod \"installer-9-crc\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 
10:45:05.022757 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e745f80a-00b6-4114-8b93-60a2471d6622-kube-api-access\") pod \"installer-9-crc\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.095372 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cadeff5-99fa-4350-ab0b-fafde7e713a1-secret-volume\") pod \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.095500 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9zft\" (UniqueName: \"kubernetes.io/projected/4cadeff5-99fa-4350-ab0b-fafde7e713a1-kube-api-access-d9zft\") pod \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.095536 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cadeff5-99fa-4350-ab0b-fafde7e713a1-config-volume\") pod \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\" (UID: \"4cadeff5-99fa-4350-ab0b-fafde7e713a1\") " Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.096438 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cadeff5-99fa-4350-ab0b-fafde7e713a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "4cadeff5-99fa-4350-ab0b-fafde7e713a1" (UID: "4cadeff5-99fa-4350-ab0b-fafde7e713a1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.099044 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cadeff5-99fa-4350-ab0b-fafde7e713a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4cadeff5-99fa-4350-ab0b-fafde7e713a1" (UID: "4cadeff5-99fa-4350-ab0b-fafde7e713a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.100019 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cadeff5-99fa-4350-ab0b-fafde7e713a1-kube-api-access-d9zft" (OuterVolumeSpecName: "kube-api-access-d9zft") pod "4cadeff5-99fa-4350-ab0b-fafde7e713a1" (UID: "4cadeff5-99fa-4350-ab0b-fafde7e713a1"). InnerVolumeSpecName "kube-api-access-d9zft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.154495 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.196283 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9zft\" (UniqueName: \"kubernetes.io/projected/4cadeff5-99fa-4350-ab0b-fafde7e713a1-kube-api-access-d9zft\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.196317 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cadeff5-99fa-4350-ab0b-fafde7e713a1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.196327 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cadeff5-99fa-4350-ab0b-fafde7e713a1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.417882 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.662680 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prk5f" event={"ID":"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e","Type":"ContainerStarted","Data":"0d55e358d7f812d69544bfba9eff1dfde224a7456597fc2b5c0ec8eee5267185"} Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.665184 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e745f80a-00b6-4114-8b93-60a2471d6622","Type":"ContainerStarted","Data":"932213ecafd094d32425ec0414b9b0f10a35c53709d4e6b3a6b483241a3588fd"} Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.668222 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpp7r" 
event={"ID":"33691b33-a810-4692-a71a-0a570d29c6e8","Type":"ContainerStarted","Data":"e06c0ec241c22f42b19e237f1068709333abf0781e1a71707c30f51be5b740f9"} Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.670295 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.672916 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657" event={"ID":"4cadeff5-99fa-4350-ab0b-fafde7e713a1","Type":"ContainerDied","Data":"683f04f005c092a7882773759aca1349bda9f4f182f1a5ed52faef57fa3dd789"} Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.672966 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="683f04f005c092a7882773759aca1349bda9f4f182f1a5ed52faef57fa3dd789" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.679790 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prk5f" podStartSLOduration=3.958295295 podStartE2EDuration="46.67967397s" podCreationTimestamp="2026-02-14 10:44:19 +0000 UTC" firstStartedPulling="2026-02-14 10:44:21.87501059 +0000 UTC m=+172.243637958" lastFinishedPulling="2026-02-14 10:45:04.596389265 +0000 UTC m=+214.965016633" observedRunningTime="2026-02-14 10:45:05.67795398 +0000 UTC m=+216.046581398" watchObservedRunningTime="2026-02-14 10:45:05.67967397 +0000 UTC m=+216.048301348" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.709552 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jpp7r" podStartSLOduration=3.5459672639999997 podStartE2EDuration="45.709531107s" podCreationTimestamp="2026-02-14 10:44:20 +0000 UTC" firstStartedPulling="2026-02-14 10:44:23.070433065 +0000 UTC m=+173.439060433" lastFinishedPulling="2026-02-14 
10:45:05.233996898 +0000 UTC m=+215.602624276" observedRunningTime="2026-02-14 10:45:05.693866812 +0000 UTC m=+216.062494190" watchObservedRunningTime="2026-02-14 10:45:05.709531107 +0000 UTC m=+216.078158485" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.793169 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b6cd844d8-qwp2r"] Feb 14 10:45:05 crc kubenswrapper[4736]: E0214 10:45:05.793403 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cadeff5-99fa-4350-ab0b-fafde7e713a1" containerName="collect-profiles" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.793419 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cadeff5-99fa-4350-ab0b-fafde7e713a1" containerName="collect-profiles" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.793568 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cadeff5-99fa-4350-ab0b-fafde7e713a1" containerName="collect-profiles" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.794054 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.810184 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.810431 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.812201 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.812262 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.812285 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.812269 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-client-ca\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.812347 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.812357 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8449684-670a-40fc-9baf-b727c037d806-serving-cert\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " 
pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.812382 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txtz2\" (UniqueName: \"kubernetes.io/projected/d8449684-670a-40fc-9baf-b727c037d806-kube-api-access-txtz2\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.812430 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-config\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.812588 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-proxy-ca-bundles\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.816397 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b6cd844d8-qwp2r"] Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.819931 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.913239 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-client-ca\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.913493 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8449684-670a-40fc-9baf-b727c037d806-serving-cert\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.913516 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txtz2\" (UniqueName: \"kubernetes.io/projected/d8449684-670a-40fc-9baf-b727c037d806-kube-api-access-txtz2\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.913540 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-config\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.913620 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-proxy-ca-bundles\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.914449 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-client-ca\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.915172 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-proxy-ca-bundles\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.917124 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-config\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.920708 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8449684-670a-40fc-9baf-b727c037d806-serving-cert\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc kubenswrapper[4736]: I0214 10:45:05.929264 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txtz2\" (UniqueName: \"kubernetes.io/projected/d8449684-670a-40fc-9baf-b727c037d806-kube-api-access-txtz2\") pod \"controller-manager-b6cd844d8-qwp2r\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") " pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:05 crc 
kubenswrapper[4736]: I0214 10:45:05.996312 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.106996 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.115730 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0426dcf0-a494-41fc-9029-0779308eabe6-kube-api-access\") pod \"0426dcf0-a494-41fc-9029-0779308eabe6\" (UID: \"0426dcf0-a494-41fc-9029-0779308eabe6\") " Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.115818 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0426dcf0-a494-41fc-9029-0779308eabe6-kubelet-dir\") pod \"0426dcf0-a494-41fc-9029-0779308eabe6\" (UID: \"0426dcf0-a494-41fc-9029-0779308eabe6\") " Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.115961 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0426dcf0-a494-41fc-9029-0779308eabe6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0426dcf0-a494-41fc-9029-0779308eabe6" (UID: "0426dcf0-a494-41fc-9029-0779308eabe6"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.116169 4736 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0426dcf0-a494-41fc-9029-0779308eabe6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.118499 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0426dcf0-a494-41fc-9029-0779308eabe6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0426dcf0-a494-41fc-9029-0779308eabe6" (UID: "0426dcf0-a494-41fc-9029-0779308eabe6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.217335 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0426dcf0-a494-41fc-9029-0779308eabe6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.407109 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c" path="/var/lib/kubelet/pods/3d66609c-1c17-4cc0-9b1e-15b5ed50cc7c/volumes" Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.549792 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b6cd844d8-qwp2r"] Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.676016 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e745f80a-00b6-4114-8b93-60a2471d6622","Type":"ContainerStarted","Data":"48f5be9d12089f5f2709457e063b6c0dfd7af91fbc911fb602668ed8cc903a9b"} Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.678189 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"0426dcf0-a494-41fc-9029-0779308eabe6","Type":"ContainerDied","Data":"0978884f35f2631abbf9c88629f444eb1e4a995ca887592c1187dc04c2b23ce9"} Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.678213 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0978884f35f2631abbf9c88629f444eb1e4a995ca887592c1187dc04c2b23ce9" Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.678263 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.679367 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" event={"ID":"d8449684-670a-40fc-9baf-b727c037d806","Type":"ContainerStarted","Data":"ce91fbdbe95a2d85a7e7441aa5eb409ba18e798ca586e577eeb63a21f553777f"} Feb 14 10:45:06 crc kubenswrapper[4736]: I0214 10:45:06.693880 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.6938638790000002 podStartE2EDuration="2.693863879s" podCreationTimestamp="2026-02-14 10:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:45:06.69287601 +0000 UTC m=+217.061503378" watchObservedRunningTime="2026-02-14 10:45:06.693863879 +0000 UTC m=+217.062491247" Feb 14 10:45:07 crc kubenswrapper[4736]: I0214 10:45:07.685160 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" event={"ID":"d8449684-670a-40fc-9baf-b727c037d806","Type":"ContainerStarted","Data":"272cf39630c6c9a37a791f5bc8754bad70b600a1c4866b93e63f7f2d82a790d8"} Feb 14 10:45:07 crc kubenswrapper[4736]: I0214 10:45:07.705790 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" podStartSLOduration=12.705770402 podStartE2EDuration="12.705770402s" podCreationTimestamp="2026-02-14 10:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:45:07.702829267 +0000 UTC m=+218.071456655" watchObservedRunningTime="2026-02-14 10:45:07.705770402 +0000 UTC m=+218.074397780" Feb 14 10:45:08 crc kubenswrapper[4736]: I0214 10:45:08.689893 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:08 crc kubenswrapper[4736]: I0214 10:45:08.693694 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" Feb 14 10:45:09 crc kubenswrapper[4736]: I0214 10:45:09.647891 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prk5f" Feb 14 10:45:09 crc kubenswrapper[4736]: I0214 10:45:09.647973 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prk5f" Feb 14 10:45:09 crc kubenswrapper[4736]: I0214 10:45:09.968852 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prk5f" Feb 14 10:45:10 crc kubenswrapper[4736]: I0214 10:45:10.028943 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prk5f" Feb 14 10:45:10 crc kubenswrapper[4736]: I0214 10:45:10.197355 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prk5f"] Feb 14 10:45:11 crc kubenswrapper[4736]: I0214 10:45:11.306449 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 
14 10:45:11 crc kubenswrapper[4736]: I0214 10:45:11.306548 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:45:11 crc kubenswrapper[4736]: I0214 10:45:11.358072 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:45:11 crc kubenswrapper[4736]: I0214 10:45:11.709727 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prk5f" podUID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerName="registry-server" containerID="cri-o://0d55e358d7f812d69544bfba9eff1dfde224a7456597fc2b5c0ec8eee5267185" gracePeriod=2 Feb 14 10:45:11 crc kubenswrapper[4736]: I0214 10:45:11.762330 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:45:12 crc kubenswrapper[4736]: E0214 10:45:12.346849 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff9c6f99_33a8_48c1_8ecf_56a4e9b4ec8e.slice/crio-conmon-0d55e358d7f812d69544bfba9eff1dfde224a7456597fc2b5c0ec8eee5267185.scope\": RecentStats: unable to find data in memory cache]" Feb 14 10:45:12 crc kubenswrapper[4736]: I0214 10:45:12.605013 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpp7r"] Feb 14 10:45:12 crc kubenswrapper[4736]: I0214 10:45:12.746274 4736 generic.go:334] "Generic (PLEG): container finished" podID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerID="0d55e358d7f812d69544bfba9eff1dfde224a7456597fc2b5c0ec8eee5267185" exitCode=0 Feb 14 10:45:12 crc kubenswrapper[4736]: I0214 10:45:12.747772 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prk5f" 
event={"ID":"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e","Type":"ContainerDied","Data":"0d55e358d7f812d69544bfba9eff1dfde224a7456597fc2b5c0ec8eee5267185"} Feb 14 10:45:12 crc kubenswrapper[4736]: I0214 10:45:12.886620 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prk5f" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.006374 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-utilities\") pod \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.006665 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-catalog-content\") pod \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.006837 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jb48\" (UniqueName: \"kubernetes.io/projected/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-kube-api-access-9jb48\") pod \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\" (UID: \"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e\") " Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.007226 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-utilities" (OuterVolumeSpecName: "utilities") pod "ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" (UID: "ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.007577 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.015034 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-kube-api-access-9jb48" (OuterVolumeSpecName: "kube-api-access-9jb48") pod "ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" (UID: "ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e"). InnerVolumeSpecName "kube-api-access-9jb48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.065352 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" (UID: "ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.108815 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.108850 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jb48\" (UniqueName: \"kubernetes.io/projected/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e-kube-api-access-9jb48\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.754339 4736 generic.go:334] "Generic (PLEG): container finished" podID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerID="25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742" exitCode=0 Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.754372 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpgts" event={"ID":"36c96a86-aadc-46d0-bca7-3d9fcca42ec3","Type":"ContainerDied","Data":"25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742"} Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.757580 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jpp7r" podUID="33691b33-a810-4692-a71a-0a570d29c6e8" containerName="registry-server" containerID="cri-o://e06c0ec241c22f42b19e237f1068709333abf0781e1a71707c30f51be5b740f9" gracePeriod=2 Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.757909 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prk5f" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.761911 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prk5f" event={"ID":"ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e","Type":"ContainerDied","Data":"e86eb71c345b0bd74ff52d4fe3bf8c4f2840d73813c761085a7491023f842ac2"} Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.761966 4736 scope.go:117] "RemoveContainer" containerID="0d55e358d7f812d69544bfba9eff1dfde224a7456597fc2b5c0ec8eee5267185" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.780938 4736 scope.go:117] "RemoveContainer" containerID="a2fd5103992dfb2bfb8a40fc6cdd1f95554d2532e1ca551ae87432aed11e4cc1" Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.792734 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prk5f"] Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.795586 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prk5f"] Feb 14 10:45:13 crc kubenswrapper[4736]: I0214 10:45:13.822794 4736 scope.go:117] "RemoveContainer" containerID="16975f948dfce7f3bcf8f4c72c497bda768f747ab58289dd05ded2162cbf837f" Feb 14 10:45:14 crc kubenswrapper[4736]: I0214 10:45:14.403967 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" path="/var/lib/kubelet/pods/ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e/volumes" Feb 14 10:45:14 crc kubenswrapper[4736]: I0214 10:45:14.766970 4736 generic.go:334] "Generic (PLEG): container finished" podID="33691b33-a810-4692-a71a-0a570d29c6e8" containerID="e06c0ec241c22f42b19e237f1068709333abf0781e1a71707c30f51be5b740f9" exitCode=0 Feb 14 10:45:14 crc kubenswrapper[4736]: I0214 10:45:14.767050 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpp7r" 
event={"ID":"33691b33-a810-4692-a71a-0a570d29c6e8","Type":"ContainerDied","Data":"e06c0ec241c22f42b19e237f1068709333abf0781e1a71707c30f51be5b740f9"} Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.366904 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.447854 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-catalog-content\") pod \"33691b33-a810-4692-a71a-0a570d29c6e8\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.447919 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzmxs\" (UniqueName: \"kubernetes.io/projected/33691b33-a810-4692-a71a-0a570d29c6e8-kube-api-access-pzmxs\") pod \"33691b33-a810-4692-a71a-0a570d29c6e8\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.447951 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-utilities\") pod \"33691b33-a810-4692-a71a-0a570d29c6e8\" (UID: \"33691b33-a810-4692-a71a-0a570d29c6e8\") " Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.449685 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-utilities" (OuterVolumeSpecName: "utilities") pod "33691b33-a810-4692-a71a-0a570d29c6e8" (UID: "33691b33-a810-4692-a71a-0a570d29c6e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.453384 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33691b33-a810-4692-a71a-0a570d29c6e8-kube-api-access-pzmxs" (OuterVolumeSpecName: "kube-api-access-pzmxs") pod "33691b33-a810-4692-a71a-0a570d29c6e8" (UID: "33691b33-a810-4692-a71a-0a570d29c6e8"). InnerVolumeSpecName "kube-api-access-pzmxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.549691 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzmxs\" (UniqueName: \"kubernetes.io/projected/33691b33-a810-4692-a71a-0a570d29c6e8-kube-api-access-pzmxs\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.549723 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.775439 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpp7r" event={"ID":"33691b33-a810-4692-a71a-0a570d29c6e8","Type":"ContainerDied","Data":"236e5e6e98df17f35096e2d43dd094f33175ca1a5de75ed1053cb95d010e4a08"} Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.775519 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpp7r" Feb 14 10:45:15 crc kubenswrapper[4736]: I0214 10:45:15.775558 4736 scope.go:117] "RemoveContainer" containerID="e06c0ec241c22f42b19e237f1068709333abf0781e1a71707c30f51be5b740f9" Feb 14 10:45:16 crc kubenswrapper[4736]: I0214 10:45:16.110146 4736 scope.go:117] "RemoveContainer" containerID="40b23516961095a7c6ca32e21cbd18bb6e57f2041b9e5d94f8a6727aa224dbce" Feb 14 10:45:16 crc kubenswrapper[4736]: I0214 10:45:16.278180 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33691b33-a810-4692-a71a-0a570d29c6e8" (UID: "33691b33-a810-4692-a71a-0a570d29c6e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:45:16 crc kubenswrapper[4736]: I0214 10:45:16.279320 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33691b33-a810-4692-a71a-0a570d29c6e8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:16 crc kubenswrapper[4736]: I0214 10:45:16.419771 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpp7r"] Feb 14 10:45:16 crc kubenswrapper[4736]: I0214 10:45:16.424245 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpp7r"] Feb 14 10:45:16 crc kubenswrapper[4736]: I0214 10:45:16.485804 4736 scope.go:117] "RemoveContainer" containerID="0b3afc9e11723e5860d3e01a155969fac99e616299deea91c5a1de71cc9e5b97" Feb 14 10:45:17 crc kubenswrapper[4736]: I0214 10:45:17.696867 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 14 10:45:17 crc kubenswrapper[4736]: I0214 10:45:17.697202 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:45:17 crc kubenswrapper[4736]: I0214 10:45:17.697245 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:45:17 crc kubenswrapper[4736]: I0214 10:45:17.792401 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 10:45:17 crc kubenswrapper[4736]: I0214 10:45:17.792503 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299" gracePeriod=600 Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.450039 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33691b33-a810-4692-a71a-0a570d29c6e8" path="/var/lib/kubelet/pods/33691b33-a810-4692-a71a-0a570d29c6e8/volumes" Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.808190 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpgts" 
event={"ID":"36c96a86-aadc-46d0-bca7-3d9fcca42ec3","Type":"ContainerStarted","Data":"f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63"} Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.811163 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46g9b" event={"ID":"d3d771cd-3ef9-44db-8981-3e8241e36f30","Type":"ContainerStarted","Data":"fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f"} Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.813590 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299" exitCode=0 Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.813645 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299"} Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.815322 4736 generic.go:334] "Generic (PLEG): container finished" podID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerID="7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534" exitCode=0 Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.815370 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqrw8" event={"ID":"b71b0996-cb92-4faa-9245-95f7e9afb7fb","Type":"ContainerDied","Data":"7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534"} Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.818478 4736 generic.go:334] "Generic (PLEG): container finished" podID="c387581c-aaa7-4dbb-875a-8c506635f598" containerID="54767d63a0c55bd37d80d162602d80d47e52fbb27ba7bafa00c0caa1a0ddd763" exitCode=0 Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.818513 4736 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-pstv4" event={"ID":"c387581c-aaa7-4dbb-875a-8c506635f598","Type":"ContainerDied","Data":"54767d63a0c55bd37d80d162602d80d47e52fbb27ba7bafa00c0caa1a0ddd763"} Feb 14 10:45:18 crc kubenswrapper[4736]: I0214 10:45:18.852599 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hpgts" podStartSLOduration=4.853821448 podStartE2EDuration="1m0.852581365s" podCreationTimestamp="2026-02-14 10:44:18 +0000 UTC" firstStartedPulling="2026-02-14 10:44:21.701035394 +0000 UTC m=+172.069662762" lastFinishedPulling="2026-02-14 10:45:17.699795311 +0000 UTC m=+228.068422679" observedRunningTime="2026-02-14 10:45:18.831889694 +0000 UTC m=+229.200517072" watchObservedRunningTime="2026-02-14 10:45:18.852581365 +0000 UTC m=+229.221208733" Feb 14 10:45:19 crc kubenswrapper[4736]: I0214 10:45:19.409420 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hpgts" Feb 14 10:45:19 crc kubenswrapper[4736]: I0214 10:45:19.409611 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hpgts" Feb 14 10:45:19 crc kubenswrapper[4736]: I0214 10:45:19.519599 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7hmxn"] Feb 14 10:45:19 crc kubenswrapper[4736]: I0214 10:45:19.826407 4736 generic.go:334] "Generic (PLEG): container finished" podID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerID="fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f" exitCode=0 Feb 14 10:45:19 crc kubenswrapper[4736]: I0214 10:45:19.826540 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46g9b" event={"ID":"d3d771cd-3ef9-44db-8981-3e8241e36f30","Type":"ContainerDied","Data":"fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f"} Feb 14 
10:45:20 crc kubenswrapper[4736]: I0214 10:45:20.452838 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-hpgts" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerName="registry-server" probeResult="failure" output=< Feb 14 10:45:20 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 10:45:20 crc kubenswrapper[4736]: > Feb 14 10:45:20 crc kubenswrapper[4736]: I0214 10:45:20.833921 4736 generic.go:334] "Generic (PLEG): container finished" podID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerID="ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9" exitCode=0 Feb 14 10:45:20 crc kubenswrapper[4736]: I0214 10:45:20.833989 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxsl4" event={"ID":"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f","Type":"ContainerDied","Data":"ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9"} Feb 14 10:45:20 crc kubenswrapper[4736]: I0214 10:45:20.837136 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"e4353db1ef94e0c6a61744f3f92cc8b153d1413a006e218ae9bd6f191757294c"} Feb 14 10:45:22 crc kubenswrapper[4736]: I0214 10:45:22.851256 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pstv4" event={"ID":"c387581c-aaa7-4dbb-875a-8c506635f598","Type":"ContainerStarted","Data":"4bf7173b3e5fde323efc8d9e428597ce3d7bb3e6b632edd1ea582aff85c119ea"} Feb 14 10:45:22 crc kubenswrapper[4736]: I0214 10:45:22.854270 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46g9b" event={"ID":"d3d771cd-3ef9-44db-8981-3e8241e36f30","Type":"ContainerStarted","Data":"bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a"} Feb 14 10:45:22 crc 
kubenswrapper[4736]: I0214 10:45:22.856179 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqrw8" event={"ID":"b71b0996-cb92-4faa-9245-95f7e9afb7fb","Type":"ContainerStarted","Data":"faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425"}
Feb 14 10:45:22 crc kubenswrapper[4736]: I0214 10:45:22.873027 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pstv4" podStartSLOduration=4.754171401 podStartE2EDuration="1m4.873006576s" podCreationTimestamp="2026-02-14 10:44:18 +0000 UTC" firstStartedPulling="2026-02-14 10:44:21.936436566 +0000 UTC m=+172.305063935" lastFinishedPulling="2026-02-14 10:45:22.055271742 +0000 UTC m=+232.423899110" observedRunningTime="2026-02-14 10:45:22.869598947 +0000 UTC m=+233.238226345" watchObservedRunningTime="2026-02-14 10:45:22.873006576 +0000 UTC m=+233.241633944"
Feb 14 10:45:22 crc kubenswrapper[4736]: I0214 10:45:22.889560 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-46g9b" podStartSLOduration=3.825696911 podStartE2EDuration="1m1.889544736s" podCreationTimestamp="2026-02-14 10:44:21 +0000 UTC" firstStartedPulling="2026-02-14 10:44:24.186293725 +0000 UTC m=+174.554921093" lastFinishedPulling="2026-02-14 10:45:22.25014155 +0000 UTC m=+232.618768918" observedRunningTime="2026-02-14 10:45:22.88831487 +0000 UTC m=+233.256942238" watchObservedRunningTime="2026-02-14 10:45:22.889544736 +0000 UTC m=+233.258172104"
Feb 14 10:45:22 crc kubenswrapper[4736]: I0214 10:45:22.908999 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kqrw8" podStartSLOduration=4.050966571 podStartE2EDuration="1m2.90898059s" podCreationTimestamp="2026-02-14 10:44:20 +0000 UTC" firstStartedPulling="2026-02-14 10:44:23.02255455 +0000 UTC m=+173.391181918" lastFinishedPulling="2026-02-14 10:45:21.880568569 +0000 UTC m=+232.249195937" observedRunningTime="2026-02-14 10:45:22.905386596 +0000 UTC m=+233.274013994" watchObservedRunningTime="2026-02-14 10:45:22.90898059 +0000 UTC m=+233.277607968"
Feb 14 10:45:29 crc kubenswrapper[4736]: I0214 10:45:29.396809 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:45:29 crc kubenswrapper[4736]: I0214 10:45:29.397196 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:45:29 crc kubenswrapper[4736]: I0214 10:45:29.449452 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:45:29 crc kubenswrapper[4736]: I0214 10:45:29.452060 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:45:29 crc kubenswrapper[4736]: I0214 10:45:29.502531 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hpgts"
Feb 14 10:45:29 crc kubenswrapper[4736]: I0214 10:45:29.934452 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:45:30 crc kubenswrapper[4736]: I0214 10:45:30.918329 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kqrw8"
Feb 14 10:45:30 crc kubenswrapper[4736]: I0214 10:45:30.918668 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kqrw8"
Feb 14 10:45:30 crc kubenswrapper[4736]: I0214 10:45:30.956936 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kqrw8"
Feb 14 10:45:31 crc kubenswrapper[4736]: I0214 10:45:31.934852 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kqrw8"
Feb 14 10:45:32 crc kubenswrapper[4736]: I0214 10:45:32.136197 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-46g9b"
Feb 14 10:45:32 crc kubenswrapper[4736]: I0214 10:45:32.136356 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-46g9b"
Feb 14 10:45:32 crc kubenswrapper[4736]: I0214 10:45:32.175251 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-46g9b"
Feb 14 10:45:32 crc kubenswrapper[4736]: I0214 10:45:32.203036 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pstv4"]
Feb 14 10:45:32 crc kubenswrapper[4736]: I0214 10:45:32.203297 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pstv4" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" containerName="registry-server" containerID="cri-o://4bf7173b3e5fde323efc8d9e428597ce3d7bb3e6b632edd1ea582aff85c119ea" gracePeriod=2
Feb 14 10:45:32 crc kubenswrapper[4736]: I0214 10:45:32.910072 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hq7" event={"ID":"2ea41fdf-923c-4ec9-b482-a53e54045056","Type":"ContainerStarted","Data":"7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61"}
Feb 14 10:45:33 crc kubenswrapper[4736]: I0214 10:45:33.407489 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-46g9b"
Feb 14 10:45:33 crc kubenswrapper[4736]: I0214 10:45:33.918934 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxsl4" event={"ID":"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f","Type":"ContainerStarted","Data":"c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e"}
Feb 14 10:45:33 crc kubenswrapper[4736]: I0214 10:45:33.949557 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kxsl4" podStartSLOduration=8.381660638 podStartE2EDuration="1m15.949540027s" podCreationTimestamp="2026-02-14 10:44:18 +0000 UTC" firstStartedPulling="2026-02-14 10:44:23.048903684 +0000 UTC m=+173.417531052" lastFinishedPulling="2026-02-14 10:45:30.616783083 +0000 UTC m=+240.985410441" observedRunningTime="2026-02-14 10:45:33.944758378 +0000 UTC m=+244.313385746" watchObservedRunningTime="2026-02-14 10:45:33.949540027 +0000 UTC m=+244.318167395"
Feb 14 10:45:34 crc kubenswrapper[4736]: I0214 10:45:34.925397 4736 generic.go:334] "Generic (PLEG): container finished" podID="c387581c-aaa7-4dbb-875a-8c506635f598" containerID="4bf7173b3e5fde323efc8d9e428597ce3d7bb3e6b632edd1ea582aff85c119ea" exitCode=0
Feb 14 10:45:34 crc kubenswrapper[4736]: I0214 10:45:34.925492 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pstv4" event={"ID":"c387581c-aaa7-4dbb-875a-8c506635f598","Type":"ContainerDied","Data":"4bf7173b3e5fde323efc8d9e428597ce3d7bb3e6b632edd1ea582aff85c119ea"}
Feb 14 10:45:34 crc kubenswrapper[4736]: I0214 10:45:34.927234 4736 generic.go:334] "Generic (PLEG): container finished" podID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerID="7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61" exitCode=0
Feb 14 10:45:34 crc kubenswrapper[4736]: I0214 10:45:34.927309 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hq7" event={"ID":"2ea41fdf-923c-4ec9-b482-a53e54045056","Type":"ContainerDied","Data":"7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61"}
Feb 14 10:45:35 crc kubenswrapper[4736]: I0214 10:45:35.325640 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b6cd844d8-qwp2r"]
Feb 14 10:45:35 crc kubenswrapper[4736]: I0214 10:45:35.325856 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" podUID="d8449684-670a-40fc-9baf-b727c037d806" containerName="controller-manager" containerID="cri-o://272cf39630c6c9a37a791f5bc8754bad70b600a1c4866b93e63f7f2d82a790d8" gracePeriod=30
Feb 14 10:45:35 crc kubenswrapper[4736]: I0214 10:45:35.429863 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v"]
Feb 14 10:45:35 crc kubenswrapper[4736]: I0214 10:45:35.430060 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" podUID="8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" containerName="route-controller-manager" containerID="cri-o://1c81c27198ec42ea97dfb949c08bca2b5daf899e6ca0d8a0b33e53bbd9b2f157" gracePeriod=30
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.046213 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.108738 4736 patch_prober.go:28] interesting pod/controller-manager-b6cd844d8-qwp2r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body=
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.108802 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" podUID="d8449684-670a-40fc-9baf-b727c037d806" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused"
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.175982 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-utilities\") pod \"c387581c-aaa7-4dbb-875a-8c506635f598\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") "
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.176067 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbjdq\" (UniqueName: \"kubernetes.io/projected/c387581c-aaa7-4dbb-875a-8c506635f598-kube-api-access-jbjdq\") pod \"c387581c-aaa7-4dbb-875a-8c506635f598\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") "
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.176117 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-catalog-content\") pod \"c387581c-aaa7-4dbb-875a-8c506635f598\" (UID: \"c387581c-aaa7-4dbb-875a-8c506635f598\") "
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.176894 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-utilities" (OuterVolumeSpecName: "utilities") pod "c387581c-aaa7-4dbb-875a-8c506635f598" (UID: "c387581c-aaa7-4dbb-875a-8c506635f598"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.180973 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c387581c-aaa7-4dbb-875a-8c506635f598-kube-api-access-jbjdq" (OuterVolumeSpecName: "kube-api-access-jbjdq") pod "c387581c-aaa7-4dbb-875a-8c506635f598" (UID: "c387581c-aaa7-4dbb-875a-8c506635f598"). InnerVolumeSpecName "kube-api-access-jbjdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.278042 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjdq\" (UniqueName: \"kubernetes.io/projected/c387581c-aaa7-4dbb-875a-8c506635f598-kube-api-access-jbjdq\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.278084 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.939215 4736 generic.go:334] "Generic (PLEG): container finished" podID="8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" containerID="1c81c27198ec42ea97dfb949c08bca2b5daf899e6ca0d8a0b33e53bbd9b2f157" exitCode=0
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.939296 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" event={"ID":"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3","Type":"ContainerDied","Data":"1c81c27198ec42ea97dfb949c08bca2b5daf899e6ca0d8a0b33e53bbd9b2f157"}
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.941426 4736 generic.go:334] "Generic (PLEG): container finished" podID="d8449684-670a-40fc-9baf-b727c037d806" containerID="272cf39630c6c9a37a791f5bc8754bad70b600a1c4866b93e63f7f2d82a790d8" exitCode=0
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.941483 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" event={"ID":"d8449684-670a-40fc-9baf-b727c037d806","Type":"ContainerDied","Data":"272cf39630c6c9a37a791f5bc8754bad70b600a1c4866b93e63f7f2d82a790d8"}
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.943603 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pstv4" event={"ID":"c387581c-aaa7-4dbb-875a-8c506635f598","Type":"ContainerDied","Data":"58c7d078c23303e5274a114811744956a69158728dfb533871ebaed7dc139227"}
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.943644 4736 scope.go:117] "RemoveContainer" containerID="4bf7173b3e5fde323efc8d9e428597ce3d7bb3e6b632edd1ea582aff85c119ea"
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.943726 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pstv4"
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.959888 4736 scope.go:117] "RemoveContainer" containerID="54767d63a0c55bd37d80d162602d80d47e52fbb27ba7bafa00c0caa1a0ddd763"
Feb 14 10:45:36 crc kubenswrapper[4736]: I0214 10:45:36.975450 4736 scope.go:117] "RemoveContainer" containerID="12caeb67c220d9e384443bc85b2baf77541249902bca0f72beb63f10b5dd6d06"
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.650004 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c387581c-aaa7-4dbb-875a-8c506635f598" (UID: "c387581c-aaa7-4dbb-875a-8c506635f598"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.695546 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c387581c-aaa7-4dbb-875a-8c506635f598-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.817075 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v"
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.822146 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r"
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.883439 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pstv4"]
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.886469 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pstv4"]
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.898504 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txtz2\" (UniqueName: \"kubernetes.io/projected/d8449684-670a-40fc-9baf-b727c037d806-kube-api-access-txtz2\") pod \"d8449684-670a-40fc-9baf-b727c037d806\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") "
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.898562 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8449684-670a-40fc-9baf-b727c037d806-serving-cert\") pod \"d8449684-670a-40fc-9baf-b727c037d806\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") "
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.898589 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-proxy-ca-bundles\") pod \"d8449684-670a-40fc-9baf-b727c037d806\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") "
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.898618 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-config\") pod \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") "
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.898664 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-config\") pod \"d8449684-670a-40fc-9baf-b727c037d806\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") "
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.898725 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-serving-cert\") pod \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") "
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.898800 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-client-ca\") pod \"d8449684-670a-40fc-9baf-b727c037d806\" (UID: \"d8449684-670a-40fc-9baf-b727c037d806\") "
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.898830 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gplwq\" (UniqueName: \"kubernetes.io/projected/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-kube-api-access-gplwq\") pod \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") "
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.898916 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-client-ca\") pod \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\" (UID: \"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3\") "
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.899878 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-client-ca" (OuterVolumeSpecName: "client-ca") pod "8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" (UID: "8d36a8e2-4dbc-4f55-8000-418ba5d63ae3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.899948 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-config" (OuterVolumeSpecName: "config") pod "d8449684-670a-40fc-9baf-b727c037d806" (UID: "d8449684-670a-40fc-9baf-b727c037d806"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.899962 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-client-ca" (OuterVolumeSpecName: "client-ca") pod "d8449684-670a-40fc-9baf-b727c037d806" (UID: "d8449684-670a-40fc-9baf-b727c037d806"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.900247 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d8449684-670a-40fc-9baf-b727c037d806" (UID: "d8449684-670a-40fc-9baf-b727c037d806"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.900978 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-config" (OuterVolumeSpecName: "config") pod "8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" (UID: "8d36a8e2-4dbc-4f55-8000-418ba5d63ae3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.902661 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8449684-670a-40fc-9baf-b727c037d806-kube-api-access-txtz2" (OuterVolumeSpecName: "kube-api-access-txtz2") pod "d8449684-670a-40fc-9baf-b727c037d806" (UID: "d8449684-670a-40fc-9baf-b727c037d806"). InnerVolumeSpecName "kube-api-access-txtz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.902729 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8449684-670a-40fc-9baf-b727c037d806-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8449684-670a-40fc-9baf-b727c037d806" (UID: "d8449684-670a-40fc-9baf-b727c037d806"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.903871 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" (UID: "8d36a8e2-4dbc-4f55-8000-418ba5d63ae3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.908322 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-kube-api-access-gplwq" (OuterVolumeSpecName: "kube-api-access-gplwq") pod "8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" (UID: "8d36a8e2-4dbc-4f55-8000-418ba5d63ae3"). InnerVolumeSpecName "kube-api-access-gplwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.950471 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v" event={"ID":"8d36a8e2-4dbc-4f55-8000-418ba5d63ae3","Type":"ContainerDied","Data":"319ec13c983da8a38bd457d635a775cbd0278f6ea362178aba7064725de01867"}
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.950545 4736 scope.go:117] "RemoveContainer" containerID="1c81c27198ec42ea97dfb949c08bca2b5daf899e6ca0d8a0b33e53bbd9b2f157"
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.950653 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v"
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.961359 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r" event={"ID":"d8449684-670a-40fc-9baf-b727c037d806","Type":"ContainerDied","Data":"ce91fbdbe95a2d85a7e7441aa5eb409ba18e798ca586e577eeb63a21f553777f"}
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.961442 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b6cd844d8-qwp2r"
Feb 14 10:45:37 crc kubenswrapper[4736]: I0214 10:45:37.980881 4736 scope.go:117] "RemoveContainer" containerID="272cf39630c6c9a37a791f5bc8754bad70b600a1c4866b93e63f7f2d82a790d8"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.000239 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.000280 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.000292 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gplwq\" (UniqueName: \"kubernetes.io/projected/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-kube-api-access-gplwq\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.000306 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.000316 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txtz2\" (UniqueName: \"kubernetes.io/projected/d8449684-670a-40fc-9baf-b727c037d806-kube-api-access-txtz2\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.000325 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8449684-670a-40fc-9baf-b727c037d806-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.000334 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.000346 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3-config\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.000357 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8449684-670a-40fc-9baf-b727c037d806-config\") on node \"crc\" DevicePath \"\""
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.010954 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b6cd844d8-qwp2r"]
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.016092 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b6cd844d8-qwp2r"]
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.026207 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v"]
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.029644 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcffbfbdf-qh79v"]
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.408504 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" path="/var/lib/kubelet/pods/8d36a8e2-4dbc-4f55-8000-418ba5d63ae3/volumes"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.409648 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" path="/var/lib/kubelet/pods/c387581c-aaa7-4dbb-875a-8c506635f598/volumes"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.410937 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8449684-670a-40fc-9baf-b727c037d806" path="/var/lib/kubelet/pods/d8449684-670a-40fc-9baf-b727c037d806/volumes"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.831032 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56856844c4-9gkzt"]
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832417 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" containerName="route-controller-manager"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832466 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" containerName="route-controller-manager"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832489 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" containerName="extract-content"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832506 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" containerName="extract-content"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832539 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33691b33-a810-4692-a71a-0a570d29c6e8" containerName="registry-server"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832555 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="33691b33-a810-4692-a71a-0a570d29c6e8" containerName="registry-server"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832584 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33691b33-a810-4692-a71a-0a570d29c6e8" containerName="extract-content"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832601 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="33691b33-a810-4692-a71a-0a570d29c6e8" containerName="extract-content"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832625 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0426dcf0-a494-41fc-9029-0779308eabe6" containerName="pruner"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832641 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0426dcf0-a494-41fc-9029-0779308eabe6" containerName="pruner"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832665 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8449684-670a-40fc-9baf-b727c037d806" containerName="controller-manager"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832682 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8449684-670a-40fc-9baf-b727c037d806" containerName="controller-manager"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832711 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33691b33-a810-4692-a71a-0a570d29c6e8" containerName="extract-utilities"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832726 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="33691b33-a810-4692-a71a-0a570d29c6e8" containerName="extract-utilities"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832827 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerName="extract-content"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832844 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerName="extract-content"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832864 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerName="extract-utilities"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832880 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerName="extract-utilities"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832899 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" containerName="registry-server"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832914 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" containerName="registry-server"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832939 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerName="registry-server"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.832956 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerName="registry-server"
Feb 14 10:45:38 crc kubenswrapper[4736]: E0214 10:45:38.832984 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" containerName="extract-utilities"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.833004 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" containerName="extract-utilities"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.833298 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8449684-670a-40fc-9baf-b727c037d806" containerName="controller-manager"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.833335 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9c6f99-33a8-48c1-8ecf-56a4e9b4ec8e" containerName="registry-server"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.833371 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c387581c-aaa7-4dbb-875a-8c506635f598" containerName="registry-server"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.833386 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="33691b33-a810-4692-a71a-0a570d29c6e8" containerName="registry-server"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.833409 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d36a8e2-4dbc-4f55-8000-418ba5d63ae3" containerName="route-controller-manager"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.833429 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0426dcf0-a494-41fc-9029-0779308eabe6" containerName="pruner"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.834210 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.840639 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.841254 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.841691 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.841798 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.842097 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.842161 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.853735 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.856366 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6"]
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.857083 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.861709 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.862204 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.862431 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.862672 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.862774 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.862926 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.866405 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56856844c4-9gkzt"]
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.877781 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6"]
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.913363 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmwqn\" (UniqueName: \"kubernetes.io/projected/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-kube-api-access-zmwqn\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.913501 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-config\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.913561 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-serving-cert\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.913659 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-proxy-ca-bundles\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt"
Feb 14 10:45:38 crc kubenswrapper[4736]: I0214 10:45:38.913687 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-client-ca\") pod \"controller-manager-56856844c4-9gkzt\" (UID:
\"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.015464 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-proxy-ca-bundles\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.015527 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-client-ca\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.015590 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-client-ca\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.015633 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmwqn\" (UniqueName: \"kubernetes.io/projected/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-kube-api-access-zmwqn\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.015702 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-config\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.015736 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe465cf-1a68-44cd-9f65-ffabb7ab311e-serving-cert\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.015863 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-serving-cert\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.015933 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xp5\" (UniqueName: \"kubernetes.io/projected/afe465cf-1a68-44cd-9f65-ffabb7ab311e-kube-api-access-q2xp5\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.015967 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-config\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " 
pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.016663 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-proxy-ca-bundles\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.017193 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-client-ca\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.017619 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-config\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.027022 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-serving-cert\") pod \"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.046138 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmwqn\" (UniqueName: \"kubernetes.io/projected/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-kube-api-access-zmwqn\") pod 
\"controller-manager-56856844c4-9gkzt\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") " pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.117347 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-client-ca\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.117509 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe465cf-1a68-44cd-9f65-ffabb7ab311e-serving-cert\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.117820 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2xp5\" (UniqueName: \"kubernetes.io/projected/afe465cf-1a68-44cd-9f65-ffabb7ab311e-kube-api-access-q2xp5\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.118491 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-client-ca\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.120624 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-config\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.124134 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe465cf-1a68-44cd-9f65-ffabb7ab311e-serving-cert\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.126912 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-config\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.151467 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2xp5\" (UniqueName: \"kubernetes.io/projected/afe465cf-1a68-44cd-9f65-ffabb7ab311e-kube-api-access-q2xp5\") pod \"route-controller-manager-77fd8c5b8b-kftv6\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") " pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.165059 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.185414 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.291007 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kxsl4" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.292072 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kxsl4" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.343216 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kxsl4" Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.451387 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56856844c4-9gkzt"] Feb 14 10:45:39 crc kubenswrapper[4736]: W0214 10:45:39.458140 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f2aaf5_5d63_4892_9e32_0537daa7cf1f.slice/crio-a3fd5ca7f14eb77f48f667cecb3615caa863a86695b602c96776dfe4b933e42c WatchSource:0}: Error finding container a3fd5ca7f14eb77f48f667cecb3615caa863a86695b602c96776dfe4b933e42c: Status 404 returned error can't find the container with id a3fd5ca7f14eb77f48f667cecb3615caa863a86695b602c96776dfe4b933e42c Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.641171 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6"] Feb 14 10:45:39 crc kubenswrapper[4736]: W0214 10:45:39.647429 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafe465cf_1a68_44cd_9f65_ffabb7ab311e.slice/crio-c122672d24f3b7cb62678a9d0272be6b94e999fc443a98769c0d0aec58b29b2a WatchSource:0}: Error finding container 
c122672d24f3b7cb62678a9d0272be6b94e999fc443a98769c0d0aec58b29b2a: Status 404 returned error can't find the container with id c122672d24f3b7cb62678a9d0272be6b94e999fc443a98769c0d0aec58b29b2a Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.982163 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" event={"ID":"afe465cf-1a68-44cd-9f65-ffabb7ab311e","Type":"ContainerStarted","Data":"c122672d24f3b7cb62678a9d0272be6b94e999fc443a98769c0d0aec58b29b2a"} Feb 14 10:45:39 crc kubenswrapper[4736]: I0214 10:45:39.983584 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" event={"ID":"27f2aaf5-5d63-4892-9e32-0537daa7cf1f","Type":"ContainerStarted","Data":"a3fd5ca7f14eb77f48f667cecb3615caa863a86695b602c96776dfe4b933e42c"} Feb 14 10:45:40 crc kubenswrapper[4736]: I0214 10:45:40.053108 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kxsl4" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.004153 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" event={"ID":"afe465cf-1a68-44cd-9f65-ffabb7ab311e","Type":"ContainerStarted","Data":"0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd"} Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.005607 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" event={"ID":"27f2aaf5-5d63-4892-9e32-0537daa7cf1f","Type":"ContainerStarted","Data":"2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026"} Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.311812 4736 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 
10:45:43.312426 4736 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.312582 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.312675 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0" gracePeriod=15 Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.312692 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911" gracePeriod=15 Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.312793 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008" gracePeriod=15 Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.312790 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be" gracePeriod=15 Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.312834 4736 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf" gracePeriod=15 Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.314557 4736 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 10:45:43 crc kubenswrapper[4736]: E0214 10:45:43.314776 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.314791 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 10:45:43 crc kubenswrapper[4736]: E0214 10:45:43.314807 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.314814 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 10:45:43 crc kubenswrapper[4736]: E0214 10:45:43.314824 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.314833 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 14 10:45:43 crc kubenswrapper[4736]: E0214 10:45:43.314848 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.314857 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 14 10:45:43 crc kubenswrapper[4736]: E0214 10:45:43.314869 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.314877 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 10:45:43 crc kubenswrapper[4736]: E0214 10:45:43.314887 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.314894 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 10:45:43 crc kubenswrapper[4736]: E0214 10:45:43.314910 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.314918 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.315033 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.315045 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.315056 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.315065 4736 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.315077 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.315088 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.337585 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.499178 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.499232 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.499256 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc 
kubenswrapper[4736]: I0214 10:45:43.499288 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.499317 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.499380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.499414 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.499454 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: E0214 10:45:43.546165 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-56856844c4-9gkzt.1894171796cfb8a7 openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-56856844c4-9gkzt,UID:27f2aaf5-5d63-4892-9e32-0537daa7cf1f,APIVersion:v1,ResourceVersion:29678,FieldPath:spec.containers{controller-manager},},Reason:Started,Message:Started container controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 10:45:43.545329831 +0000 UTC m=+253.913957199,LastTimestamp:2026-02-14 10:45:43.545329831 +0000 UTC m=+253.913957199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.600858 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.600900 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.600920 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.600941 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.600962 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.600995 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601019 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601041 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601122 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601156 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601176 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601195 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601211 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601229 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601247 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.601265 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: I0214 10:45:43.633880 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:45:43 crc kubenswrapper[4736]: W0214 10:45:43.661422 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-6d93dc6b2acbe2324b2074ca7f98a253e8a4de3c9f3302d68b7fe616ee104c2c WatchSource:0}: Error finding container 6d93dc6b2acbe2324b2074ca7f98a253e8a4de3c9f3302d68b7fe616ee104c2c: Status 404 returned error can't find the container with id 6d93dc6b2acbe2324b2074ca7f98a253e8a4de3c9f3302d68b7fe616ee104c2c Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.010974 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hq7" event={"ID":"2ea41fdf-923c-4ec9-b482-a53e54045056","Type":"ContainerStarted","Data":"499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272"} Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.011929 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.012305 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.012546 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.013342 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.014268 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.015467 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf" exitCode=2 Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.016913 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6d93dc6b2acbe2324b2074ca7f98a253e8a4de3c9f3302d68b7fe616ee104c2c"} Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.017644 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.017768 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.017860 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 
38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.018244 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.018586 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.019008 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.019387 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.019623 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.019877 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.020096 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.020293 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.022559 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.023016 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial 
tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.023068 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.023278 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.023531 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.023785 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.024082 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc 
kubenswrapper[4736]: I0214 10:45:44.024455 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.024794 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.025055 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.025259 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.025565 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:44 crc kubenswrapper[4736]: I0214 10:45:44.579448 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" podUID="446f17e4-455e-45ae-affc-f27215421058" containerName="oauth-openshift" containerID="cri-o://2652dc5c51f482d1e5999027234e0dc13a9182674cf3b3a97f52ca82839d22db" gracePeriod=15 Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.025950 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.027484 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.028555 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911" exitCode=0 Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.028584 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008" exitCode=0 Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.028598 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be" exitCode=0 Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.028672 4736 scope.go:117] "RemoveContainer" containerID="8aa630ccdcd8728ba37bf7bca94415df8c12a0df818d5c833545f4a6bcdd4064" Feb 14 10:45:45 crc 
kubenswrapper[4736]: I0214 10:45:45.032172 4736 generic.go:334] "Generic (PLEG): container finished" podID="e745f80a-00b6-4114-8b93-60a2471d6622" containerID="48f5be9d12089f5f2709457e063b6c0dfd7af91fbc911fb602668ed8cc903a9b" exitCode=0 Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.032315 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e745f80a-00b6-4114-8b93-60a2471d6622","Type":"ContainerDied","Data":"48f5be9d12089f5f2709457e063b6c0dfd7af91fbc911fb602668ed8cc903a9b"} Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.033620 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.034031 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.034371 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.034689 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.034963 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:45 crc kubenswrapper[4736]: I0214 10:45:45.037788 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40"} Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.042674 4736 generic.go:334] "Generic (PLEG): container finished" podID="446f17e4-455e-45ae-affc-f27215421058" containerID="2652dc5c51f482d1e5999027234e0dc13a9182674cf3b3a97f52ca82839d22db" exitCode=0 Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.042717 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" event={"ID":"446f17e4-455e-45ae-affc-f27215421058","Type":"ContainerDied","Data":"2652dc5c51f482d1e5999027234e0dc13a9182674cf3b3a97f52ca82839d22db"} Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.046966 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.047617 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0" exitCode=0 Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.050219 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.050632 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.050930 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.051283 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.051458 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.115736 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.116224 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.116384 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.116843 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.117251 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 
10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.117535 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.117828 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244110 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-idp-0-file-data\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244633 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-router-certs\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244656 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft47j\" (UniqueName: \"kubernetes.io/projected/446f17e4-455e-45ae-affc-f27215421058-kube-api-access-ft47j\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: 
\"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244678 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-trusted-ca-bundle\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244711 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-service-ca\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244728 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-serving-cert\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244765 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-provider-selection\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244801 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-session\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " 
Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244826 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-cliconfig\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244845 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-login\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244862 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-audit-policies\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244882 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-ocp-branding-template\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244897 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-error\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.244917 4736 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/446f17e4-455e-45ae-affc-f27215421058-audit-dir\") pod \"446f17e4-455e-45ae-affc-f27215421058\" (UID: \"446f17e4-455e-45ae-affc-f27215421058\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.246615 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.246963 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.250733 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.250816 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/446f17e4-455e-45ae-affc-f27215421058-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.255168 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.255634 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.259337 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.269561 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.271363 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.272278 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.273133 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.274237 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.277823 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.281114 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446f17e4-455e-45ae-affc-f27215421058-kube-api-access-ft47j" (OuterVolumeSpecName: "kube-api-access-ft47j") pod "446f17e4-455e-45ae-affc-f27215421058" (UID: "446f17e4-455e-45ae-affc-f27215421058"). InnerVolumeSpecName "kube-api-access-ft47j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.336118 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.336532 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.336726 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.336882 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.337025 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.337221 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.337508 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.345689 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-var-lock\") pod \"e745f80a-00b6-4114-8b93-60a2471d6622\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.345822 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-var-lock" (OuterVolumeSpecName: "var-lock") pod "e745f80a-00b6-4114-8b93-60a2471d6622" (UID: "e745f80a-00b6-4114-8b93-60a2471d6622"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.345893 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e745f80a-00b6-4114-8b93-60a2471d6622-kube-api-access\") pod \"e745f80a-00b6-4114-8b93-60a2471d6622\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347060 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-kubelet-dir\") pod \"e745f80a-00b6-4114-8b93-60a2471d6622\" (UID: \"e745f80a-00b6-4114-8b93-60a2471d6622\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347317 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347335 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347347 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347359 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc 
kubenswrapper[4736]: I0214 10:45:46.347372 4736 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347386 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347496 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347509 4736 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/446f17e4-455e-45ae-affc-f27215421058-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347521 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347534 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347545 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft47j\" (UniqueName: \"kubernetes.io/projected/446f17e4-455e-45ae-affc-f27215421058-kube-api-access-ft47j\") on node \"crc\" DevicePath 
\"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347557 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347568 4736 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-var-lock\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347579 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347591 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/446f17e4-455e-45ae-affc-f27215421058-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.347615 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e745f80a-00b6-4114-8b93-60a2471d6622" (UID: "e745f80a-00b6-4114-8b93-60a2471d6622"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.350674 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e745f80a-00b6-4114-8b93-60a2471d6622-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e745f80a-00b6-4114-8b93-60a2471d6622" (UID: "e745f80a-00b6-4114-8b93-60a2471d6622"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.448806 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e745f80a-00b6-4114-8b93-60a2471d6622-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.449196 4736 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e745f80a-00b6-4114-8b93-60a2471d6622-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.538579 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.540991 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.541571 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.541935 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.542270 4736 status_manager.go:851] 
"Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.542606 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.542875 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.543118 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.543419 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.550985 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.551024 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.551042 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.551078 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.551117 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.551202 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.551298 4736 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.551310 4736 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:46 crc kubenswrapper[4736]: I0214 10:45:46.551318 4736 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.052710 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" event={"ID":"446f17e4-455e-45ae-affc-f27215421058","Type":"ContainerDied","Data":"9599e5f834240a82060e45653f21801d00ddd2626e1ba63b32ec874689215503"} Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.052774 4736 scope.go:117] "RemoveContainer" containerID="2652dc5c51f482d1e5999027234e0dc13a9182674cf3b3a97f52ca82839d22db" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.053709 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.054601 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.054867 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.055079 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.055347 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.055648 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.055848 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.056083 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.056877 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.058435 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.063734 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.064062 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.064375 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.064705 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.065341 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.065778 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e745f80a-00b6-4114-8b93-60a2471d6622","Type":"ContainerDied","Data":"932213ecafd094d32425ec0414b9b0f10a35c53709d4e6b3a6b483241a3588fd"} Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.065808 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="932213ecafd094d32425ec0414b9b0f10a35c53709d4e6b3a6b483241a3588fd" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.065803 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.065942 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.066054 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.072093 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.072587 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.072643 4736 scope.go:117] "RemoveContainer" containerID="616cfa97ca145ac4ebc6df471de387450d00692cd829a673d9b015ca7ee19911" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.072836 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.073136 4736 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.073818 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.074222 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.074667 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.075055 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 
10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.075277 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.075521 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.075776 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.075986 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.076270 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: 
connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.076533 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.094458 4736 scope.go:117] "RemoveContainer" containerID="9a4f76e31b7e70410e208abed4e42cf1608f548e34563e4f4e1b2032f42b0008" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.111829 4736 scope.go:117] "RemoveContainer" containerID="a8f051b8cc8791b138b579435e6bef63a816ea27ce063ca657f462269b77b5be" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.123845 4736 scope.go:117] "RemoveContainer" containerID="5c63446a32381c037e3e1c70b3f2edecbad62bbf9f47e00a1d127e945f3c30cf" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.135653 4736 scope.go:117] "RemoveContainer" containerID="c1a3be51167e400b5adee2048024defdfb76ac6768d86e572218eb5b3537d8a0" Feb 14 10:45:47 crc kubenswrapper[4736]: I0214 10:45:47.151996 4736 scope.go:117] "RemoveContainer" containerID="29015b68b6562dff954f11a9975781a503f3468aa83f4e9012675d8966fbf05f" Feb 14 10:45:48 crc kubenswrapper[4736]: I0214 10:45:48.403629 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 14 10:45:49 crc kubenswrapper[4736]: E0214 10:45:49.435009 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-56856844c4-9gkzt.1894171796cfb8a7 openshift-controller-manager 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-56856844c4-9gkzt,UID:27f2aaf5-5d63-4892-9e32-0537daa7cf1f,APIVersion:v1,ResourceVersion:29678,FieldPath:spec.containers{controller-manager},},Reason:Started,Message:Started container controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 10:45:43.545329831 +0000 UTC m=+253.913957199,LastTimestamp:2026-02-14 10:45:43.545329831 +0000 UTC m=+253.913957199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 10:45:50 crc kubenswrapper[4736]: I0214 10:45:50.402939 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: I0214 10:45:50.404559 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: I0214 10:45:50.404986 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: I0214 10:45:50.405396 4736 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: I0214 10:45:50.405715 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: I0214 10:45:50.406171 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: E0214 10:45:50.611405 4736 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: E0214 10:45:50.611633 4736 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: E0214 10:45:50.611961 4736 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: E0214 10:45:50.612209 4736 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: E0214 10:45:50.612540 4736 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:50 crc kubenswrapper[4736]: I0214 10:45:50.612563 4736 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 14 10:45:50 crc kubenswrapper[4736]: E0214 10:45:50.612790 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="200ms" Feb 14 10:45:50 crc kubenswrapper[4736]: E0214 10:45:50.814375 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="400ms" Feb 14 10:45:51 crc kubenswrapper[4736]: E0214 10:45:51.215722 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="800ms" Feb 14 10:45:52 crc kubenswrapper[4736]: 
E0214 10:45:52.016617 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="1.6s" Feb 14 10:45:52 crc kubenswrapper[4736]: I0214 10:45:52.521245 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:45:52 crc kubenswrapper[4736]: I0214 10:45:52.521609 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:45:52 crc kubenswrapper[4736]: I0214 10:45:52.597985 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:45:52 crc kubenswrapper[4736]: I0214 10:45:52.598630 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:52 crc kubenswrapper[4736]: I0214 10:45:52.599029 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:52 crc kubenswrapper[4736]: I0214 10:45:52.599378 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:52 crc kubenswrapper[4736]: I0214 10:45:52.599841 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:52 crc kubenswrapper[4736]: I0214 10:45:52.600677 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:52 crc kubenswrapper[4736]: I0214 10:45:52.601027 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:53 crc kubenswrapper[4736]: I0214 10:45:53.159779 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:45:53 crc kubenswrapper[4736]: I0214 10:45:53.161317 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 
38.102.83.212:6443: connect: connection refused" Feb 14 10:45:53 crc kubenswrapper[4736]: I0214 10:45:53.162146 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:53 crc kubenswrapper[4736]: I0214 10:45:53.162390 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:53 crc kubenswrapper[4736]: I0214 10:45:53.162612 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:53 crc kubenswrapper[4736]: I0214 10:45:53.162908 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:53 crc kubenswrapper[4736]: I0214 10:45:53.163149 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:53 crc kubenswrapper[4736]: E0214 10:45:53.618664 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="3.2s" Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.132058 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.132332 4736 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd" exitCode=1 Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.132368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd"} Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.132932 4736 scope.go:117] "RemoveContainer" containerID="6cd0bf48d9c043b0d8fb8da88bc3d7a5c8a8909d1d898fba9b45a2ad062c60bd" Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.133250 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 
10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.133890 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.134517 4736 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.135114 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.135396 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.135656 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:56 crc kubenswrapper[4736]: I0214 10:45:56.135951 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:56 crc kubenswrapper[4736]: E0214 10:45:56.820037 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="6.4s" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.140677 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.140775 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba64180d866ff25534e320115ef8a9d40a662b967a45d539b3032dc8c9a62331"} Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.141967 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.142663 4736 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.152588 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.153999 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.154534 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.155334 4736 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" 
Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.155630 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.396869 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.398070 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.399000 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.399570 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.400152 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" 
pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.400644 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.401031 4736 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.401359 4736 status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.420167 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.420210 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:45:57 crc 
kubenswrapper[4736]: E0214 10:45:57.420623 4736 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:57 crc kubenswrapper[4736]: I0214 10:45:57.421169 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.148817 4736 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4390791f9a5a202e7223c5ae9db708a2017b45ac70a0353998492df7099962d3" exitCode=0 Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.148858 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4390791f9a5a202e7223c5ae9db708a2017b45ac70a0353998492df7099962d3"} Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.148883 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"76292469d4ebc19208a927452a1224ef845e0cd187490a97b0eeb2e9b253bbb8"} Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.149130 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.149144 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:45:58 crc kubenswrapper[4736]: E0214 10:45:58.149531 4736 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.149685 4736 status_manager.go:851] "Failed to get status for pod" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" pod="openshift-marketplace/redhat-operators-n9hq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n9hq7\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.150107 4736 status_manager.go:851] "Failed to get status for pod" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-77fd8c5b8b-kftv6\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.150418 4736 status_manager.go:851] "Failed to get status for pod" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-56856844c4-9gkzt\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.150599 4736 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.150843 4736 
status_manager.go:851] "Failed to get status for pod" podUID="446f17e4-455e-45ae-affc-f27215421058" pod="openshift-authentication/oauth-openshift-558db77b4-7hmxn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-7hmxn\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.151099 4736 status_manager.go:851] "Failed to get status for pod" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:58 crc kubenswrapper[4736]: I0214 10:45:58.151355 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Feb 14 10:45:59 crc kubenswrapper[4736]: I0214 10:45:59.167974 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"54526febfb16261e347d17e686f1e6730c4e1669ccf27dd0ff206c79be93e75c"} Feb 14 10:45:59 crc kubenswrapper[4736]: I0214 10:45:59.168282 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a1dc96be83d88a3a6b0a74235fb50465c33ccb8b33e229383455385c8515397b"} Feb 14 10:45:59 crc kubenswrapper[4736]: I0214 10:45:59.168294 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b0cd27c3cf4391bf6e6f1fb61bd6073f390ff6f52f652bf2bf3d97f34a6e33d7"} Feb 14 10:45:59 crc kubenswrapper[4736]: I0214 10:45:59.168304 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e81a527cdbfe36f18036063fd6579752975c3a5d587e0d73486fe2bbe39bbb2"} Feb 14 10:45:59 crc kubenswrapper[4736]: I0214 10:45:59.209564 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:46:00 crc kubenswrapper[4736]: I0214 10:46:00.175300 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e4c9bebfccced3a4954e967df8171decd3a31759bef5dcd5c1870c5e42b41e0f"} Feb 14 10:46:00 crc kubenswrapper[4736]: I0214 10:46:00.175702 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:46:00 crc kubenswrapper[4736]: I0214 10:46:00.175809 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:46:00 crc kubenswrapper[4736]: I0214 10:46:00.175833 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:46:00 crc kubenswrapper[4736]: I0214 10:46:00.178075 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:46:00 crc kubenswrapper[4736]: I0214 10:46:00.178272 4736 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: 
Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 14 10:46:00 crc kubenswrapper[4736]: I0214 10:46:00.178306 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.369683 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.369843 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.369900 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.370044 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.371539 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.372129 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.372564 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.381224 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.381696 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.390949 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 
10:46:01.395673 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.397824 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.423370 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.439434 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 10:46:01 crc kubenswrapper[4736]: I0214 10:46:01.454571 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:46:01 crc kubenswrapper[4736]: W0214 10:46:01.883175 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-52e6c94b0d48ef22ef496385ab4cf6eb2b1bb1ba3ea7b11a7b25e12fc31db335 WatchSource:0}: Error finding container 52e6c94b0d48ef22ef496385ab4cf6eb2b1bb1ba3ea7b11a7b25e12fc31db335: Status 404 returned error can't find the container with id 52e6c94b0d48ef22ef496385ab4cf6eb2b1bb1ba3ea7b11a7b25e12fc31db335 Feb 14 10:46:01 crc kubenswrapper[4736]: W0214 10:46:01.914097 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-f33c8077beff79aa36edffe01bec3aa8c521e885dfefb0e696d30a5ea4fd12bd WatchSource:0}: Error finding container f33c8077beff79aa36edffe01bec3aa8c521e885dfefb0e696d30a5ea4fd12bd: Status 404 returned error can't find the container with id f33c8077beff79aa36edffe01bec3aa8c521e885dfefb0e696d30a5ea4fd12bd Feb 14 10:46:02 crc kubenswrapper[4736]: W0214 10:46:02.015432 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-0de674d8be960e150976fdfc4e2c81c9230bb49b355392f4d04083b7d3d81eea WatchSource:0}: Error finding container 0de674d8be960e150976fdfc4e2c81c9230bb49b355392f4d04083b7d3d81eea: Status 404 returned error can't find the container with id 0de674d8be960e150976fdfc4e2c81c9230bb49b355392f4d04083b7d3d81eea Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.185576 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ab3fbe45a7ec7dc23d115c608e26cddb3ddc6afe1d04e6279eaeceb36bad9bee"} Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.185659 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f33c8077beff79aa36edffe01bec3aa8c521e885dfefb0e696d30a5ea4fd12bd"} Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.186539 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e299174a4edd62771b112978a6a2a9d1f91218686b2bc63963b684b716dd9d1f"} Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.186585 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0de674d8be960e150976fdfc4e2c81c9230bb49b355392f4d04083b7d3d81eea"} Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.187111 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.188930 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f71a1c2055161e658837c10f16a5ccf8a8893af4292cfe80bb878bc1b546450f"} Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.188994 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"52e6c94b0d48ef22ef496385ab4cf6eb2b1bb1ba3ea7b11a7b25e12fc31db335"} Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.422652 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.423031 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:46:02 crc kubenswrapper[4736]: I0214 10:46:02.427855 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:46:04 crc kubenswrapper[4736]: I0214 10:46:04.203851 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 14 10:46:04 crc kubenswrapper[4736]: I0214 10:46:04.204063 4736 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="f71a1c2055161e658837c10f16a5ccf8a8893af4292cfe80bb878bc1b546450f" exitCode=255 Feb 14 10:46:04 crc kubenswrapper[4736]: I0214 10:46:04.204089 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"f71a1c2055161e658837c10f16a5ccf8a8893af4292cfe80bb878bc1b546450f"} Feb 14 10:46:04 crc kubenswrapper[4736]: I0214 10:46:04.204420 4736 scope.go:117] "RemoveContainer" containerID="f71a1c2055161e658837c10f16a5ccf8a8893af4292cfe80bb878bc1b546450f" Feb 14 10:46:05 crc kubenswrapper[4736]: I0214 10:46:05.202677 4736 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:46:05 crc kubenswrapper[4736]: I0214 10:46:05.209816 4736 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 14 10:46:05 crc kubenswrapper[4736]: I0214 10:46:05.209868 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"dac00d10ccaca813e9666920f84275e37cf64af825f82ea5422ae7b52322112d"} Feb 14 10:46:05 crc kubenswrapper[4736]: I0214 10:46:05.413296 4736 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2e88c0d2-37bd-4dfe-81ab-9568d646291a" Feb 14 10:46:06 crc kubenswrapper[4736]: I0214 10:46:06.216811 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 14 10:46:06 crc kubenswrapper[4736]: I0214 10:46:06.218034 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 14 10:46:06 crc kubenswrapper[4736]: I0214 10:46:06.218114 4736 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="dac00d10ccaca813e9666920f84275e37cf64af825f82ea5422ae7b52322112d" exitCode=255 Feb 14 10:46:06 crc kubenswrapper[4736]: I0214 10:46:06.218252 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"dac00d10ccaca813e9666920f84275e37cf64af825f82ea5422ae7b52322112d"} Feb 14 10:46:06 crc kubenswrapper[4736]: I0214 10:46:06.218351 
4736 scope.go:117] "RemoveContainer" containerID="f71a1c2055161e658837c10f16a5ccf8a8893af4292cfe80bb878bc1b546450f" Feb 14 10:46:06 crc kubenswrapper[4736]: I0214 10:46:06.218547 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:46:06 crc kubenswrapper[4736]: I0214 10:46:06.218578 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:46:06 crc kubenswrapper[4736]: I0214 10:46:06.218843 4736 scope.go:117] "RemoveContainer" containerID="dac00d10ccaca813e9666920f84275e37cf64af825f82ea5422ae7b52322112d" Feb 14 10:46:06 crc kubenswrapper[4736]: E0214 10:46:06.219139 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:46:06 crc kubenswrapper[4736]: I0214 10:46:06.224961 4736 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2e88c0d2-37bd-4dfe-81ab-9568d646291a" Feb 14 10:46:07 crc kubenswrapper[4736]: I0214 10:46:07.227002 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 14 10:46:10 crc kubenswrapper[4736]: I0214 10:46:10.187964 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:46:10 crc kubenswrapper[4736]: 
I0214 10:46:10.196890 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 10:46:14 crc kubenswrapper[4736]: I0214 10:46:14.497752 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 14 10:46:15 crc kubenswrapper[4736]: I0214 10:46:15.148330 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 14 10:46:15 crc kubenswrapper[4736]: I0214 10:46:15.456538 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 14 10:46:15 crc kubenswrapper[4736]: I0214 10:46:15.582207 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 14 10:46:15 crc kubenswrapper[4736]: I0214 10:46:15.702663 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 14 10:46:15 crc kubenswrapper[4736]: I0214 10:46:15.751176 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 14 10:46:15 crc kubenswrapper[4736]: I0214 10:46:15.930412 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.016250 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.016496 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.019665 4736 reflector.go:368] Caches populated for *v1.RuntimeClass from 
k8s.io/client-go/informers/factory.go:160 Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.044914 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.136628 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.405289 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.644101 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.724612 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.774106 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.876603 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 14 10:46:16 crc kubenswrapper[4736]: I0214 10:46:16.896679 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 14 10:46:17 crc kubenswrapper[4736]: I0214 10:46:17.139234 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 14 10:46:17 crc kubenswrapper[4736]: I0214 10:46:17.337798 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 14 10:46:17 crc kubenswrapper[4736]: I0214 10:46:17.431735 4736 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 14 10:46:17 crc kubenswrapper[4736]: I0214 10:46:17.490350 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 10:46:17 crc kubenswrapper[4736]: I0214 10:46:17.564573 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 14 10:46:17 crc kubenswrapper[4736]: I0214 10:46:17.627057 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 14 10:46:17 crc kubenswrapper[4736]: I0214 10:46:17.935926 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.035897 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.065919 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.126653 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.138604 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.176333 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.209869 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.220210 4736 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.248363 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.276615 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.742906 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.868603 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 14 10:46:18 crc kubenswrapper[4736]: I0214 10:46:18.885379 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.116754 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.337630 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.355433 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.397493 4736 scope.go:117] "RemoveContainer" containerID="dac00d10ccaca813e9666920f84275e37cf64af825f82ea5422ae7b52322112d" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.492631 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 
10:46:19.570257 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.672325 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.694448 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.756477 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.775181 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.950767 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.983453 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 14 10:46:19 crc kubenswrapper[4736]: I0214 10:46:19.985635 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.009278 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.107928 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.137779 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 
10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.142430 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.213571 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.332348 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.332433 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"763ae56c0d51a9336ac5ac4901380e14e8983f7111307fe9ada3e77311c7ec75"} Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.406768 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.497457 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.498657 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.569511 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.624437 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.650787 4736 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.655268 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.673680 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.700611 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.735441 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.774490 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.843212 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.861394 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 14 10:46:20 crc kubenswrapper[4736]: I0214 10:46:20.902125 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.042900 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.176367 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 14 10:46:21 crc 
kubenswrapper[4736]: I0214 10:46:21.196521 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.219044 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.231642 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.325527 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.330905 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.341109 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.341693 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.341710 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.341795 4736 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="763ae56c0d51a9336ac5ac4901380e14e8983f7111307fe9ada3e77311c7ec75" exitCode=255 Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.341834 4736 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"763ae56c0d51a9336ac5ac4901380e14e8983f7111307fe9ada3e77311c7ec75"} Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.341876 4736 scope.go:117] "RemoveContainer" containerID="dac00d10ccaca813e9666920f84275e37cf64af825f82ea5422ae7b52322112d" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.342695 4736 scope.go:117] "RemoveContainer" containerID="763ae56c0d51a9336ac5ac4901380e14e8983f7111307fe9ada3e77311c7ec75" Feb 14 10:46:21 crc kubenswrapper[4736]: E0214 10:46:21.343206 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.368341 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.517652 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.612823 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.624192 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.725284 4736 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.740969 4736 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 14 10:46:21 crc kubenswrapper[4736]: I0214 10:46:21.874127 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.027778 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.048672 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.084668 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.118165 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.192460 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.264423 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.347458 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.380083 4736 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.510760 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.525809 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.561170 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.613790 4736 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.620952 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.641132 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.667933 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.800523 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.871167 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 14 10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.872100 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 14 
10:46:22 crc kubenswrapper[4736]: I0214 10:46:22.993881 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.096520 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.209498 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.293212 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.398005 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.509463 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.663580 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.663581 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.673092 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.748715 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.790589 4736 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.869544 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 10:46:23 crc kubenswrapper[4736]: I0214 10:46:23.993766 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.010428 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.042755 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.047824 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.078239 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.277217 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.291802 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.341993 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.343082 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 14 10:46:24 crc kubenswrapper[4736]: 
I0214 10:46:24.344875 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.366531 4736 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.404387 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.424968 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.478938 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.582770 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.679617 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.813916 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.902178 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 14 10:46:24 crc kubenswrapper[4736]: I0214 10:46:24.977462 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.054179 4736 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.127409 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.193617 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.254061 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.272419 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.278246 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.306806 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.334027 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.368409 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.436853 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.584538 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.588738 4736 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.656598 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.692413 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.744373 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.749326 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.779934 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.807532 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.820927 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.821848 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.846100 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.846572 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.868845 4736 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.870268 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.870795 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.917701 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 14 10:46:25 crc kubenswrapper[4736]: I0214 10:46:25.927105 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.015273 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.053464 4736 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.133867 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.161989 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.171882 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.237284 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 14 10:46:26 
crc kubenswrapper[4736]: I0214 10:46:26.241339 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.265331 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.314174 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.468891 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.550143 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.594038 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.638070 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.721027 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.821265 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.929854 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.936805 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 14 10:46:26 
crc kubenswrapper[4736]: I0214 10:46:26.941215 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 14 10:46:26 crc kubenswrapper[4736]: I0214 10:46:26.953984 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.024247 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.062091 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.101581 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.126957 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.286232 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.338312 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.344535 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.424599 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.449053 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" 
Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.456422 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.565281 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.616308 4736 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.617114 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" podStartSLOduration=52.617095881 podStartE2EDuration="52.617095881s" podCreationTimestamp="2026-02-14 10:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:46:05.256949971 +0000 UTC m=+275.625577349" watchObservedRunningTime="2026-02-14 10:46:27.617095881 +0000 UTC m=+297.985723259" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.617224 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" podStartSLOduration=52.617220925 podStartE2EDuration="52.617220925s" podCreationTimestamp="2026-02-14 10:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:46:05.285442732 +0000 UTC m=+275.654070120" watchObservedRunningTime="2026-02-14 10:46:27.617220925 +0000 UTC m=+297.985848303" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.617933 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.617926527 
podStartE2EDuration="44.617926527s" podCreationTimestamp="2026-02-14 10:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:46:05.333245477 +0000 UTC m=+275.701872855" watchObservedRunningTime="2026-02-14 10:46:27.617926527 +0000 UTC m=+297.986553895" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.619073 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n9hq7" podStartSLOduration=48.654061635 podStartE2EDuration="2m5.619069093s" podCreationTimestamp="2026-02-14 10:44:22 +0000 UTC" firstStartedPulling="2026-02-14 10:44:24.140973981 +0000 UTC m=+174.509601349" lastFinishedPulling="2026-02-14 10:45:41.105981439 +0000 UTC m=+251.474608807" observedRunningTime="2026-02-14 10:46:05.347701679 +0000 UTC m=+275.716329047" watchObservedRunningTime="2026-02-14 10:46:27.619069093 +0000 UTC m=+297.987696461" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.620711 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-7hmxn"] Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.620783 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-676447d85b-s8bzv","openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 10:46:27 crc kubenswrapper[4736]: E0214 10:46:27.621224 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" containerName="installer" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.621245 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" containerName="installer" Feb 14 10:46:27 crc kubenswrapper[4736]: E0214 10:46:27.621261 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446f17e4-455e-45ae-affc-f27215421058" 
containerName="oauth-openshift" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.621267 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="446f17e4-455e-45ae-affc-f27215421058" containerName="oauth-openshift" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.621345 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="446f17e4-455e-45ae-affc-f27215421058" containerName="oauth-openshift" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.621355 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e745f80a-00b6-4114-8b93-60a2471d6622" containerName="installer" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.621438 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.621454 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d2e3f028-461a-48ef-97b6-77ac14e74487" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.621652 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.625543 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.628177 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.629493 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.633255 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.633310 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.634153 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.634183 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.634489 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.635736 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.635931 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 14 10:46:27 crc 
kubenswrapper[4736]: I0214 10:46:27.636661 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.636713 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.636983 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.643250 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.643610 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.652824 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.657675 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.668406 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.668391005 podStartE2EDuration="22.668391005s" podCreationTimestamp="2026-02-14 10:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:46:27.666408933 +0000 UTC m=+298.035036301" watchObservedRunningTime="2026-02-14 10:46:27.668391005 +0000 UTC m=+298.037018373" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.697236 4736 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.724757 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.741145 4736 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.741330 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40" gracePeriod=5 Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.810670 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.810764 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.810819 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.810851 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-service-ca\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.810893 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa0423b9-35f9-4d54-a1a0-46153402d7aa-audit-dir\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.810930 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-session\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.810970 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-template-login\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " 
pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.811048 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-audit-policies\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.811105 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.811147 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.811190 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.811233 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchpj\" (UniqueName: \"kubernetes.io/projected/aa0423b9-35f9-4d54-a1a0-46153402d7aa-kube-api-access-zchpj\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.811293 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-router-certs\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.811324 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-template-error\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.815725 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.856370 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.863004 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.889923 4736 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.904923 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912492 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912523 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-service-ca\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912544 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa0423b9-35f9-4d54-a1a0-46153402d7aa-audit-dir\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912563 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-session\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 
10:46:27.912585 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-template-login\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912600 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-audit-policies\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912618 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912638 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912658 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912681 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchpj\" (UniqueName: \"kubernetes.io/projected/aa0423b9-35f9-4d54-a1a0-46153402d7aa-kube-api-access-zchpj\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912706 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-router-certs\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912724 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-template-error\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912928 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " 
pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.912945 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.914470 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-service-ca\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.915070 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa0423b9-35f9-4d54-a1a0-46153402d7aa-audit-dir\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.915690 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-audit-policies\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.916542 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.917258 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.918893 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-template-error\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.923664 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.924254 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-session\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 
10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.924457 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.928908 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-template-login\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.930362 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.935137 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.937235 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/aa0423b9-35f9-4d54-a1a0-46153402d7aa-v4-0-config-system-router-certs\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.945005 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zchpj\" (UniqueName: \"kubernetes.io/projected/aa0423b9-35f9-4d54-a1a0-46153402d7aa-kube-api-access-zchpj\") pod \"oauth-openshift-676447d85b-s8bzv\" (UID: \"aa0423b9-35f9-4d54-a1a0-46153402d7aa\") " pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.963564 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 14 10:46:27 crc kubenswrapper[4736]: I0214 10:46:27.990336 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.016040 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.062264 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.169688 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.170016 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.237712 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 14 10:46:28 crc 
kubenswrapper[4736]: I0214 10:46:28.238914 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.359978 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.403393 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446f17e4-455e-45ae-affc-f27215421058" path="/var/lib/kubelet/pods/446f17e4-455e-45ae-affc-f27215421058/volumes" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.518080 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.545385 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.646441 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-676447d85b-s8bzv"] Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.660829 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.703674 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 14 10:46:28 crc kubenswrapper[4736]: I0214 10:46:28.884221 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.013967 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.142809 4736 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.240379 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.324245 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.360923 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.388521 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" event={"ID":"aa0423b9-35f9-4d54-a1a0-46153402d7aa","Type":"ContainerStarted","Data":"6c36bad4b7526368f9c3e1999e638227d5ac1a62b743798b305d33aa788ec319"} Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.388573 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" event={"ID":"aa0423b9-35f9-4d54-a1a0-46153402d7aa","Type":"ContainerStarted","Data":"317dadaa1b655cc97f1e8d0c78c54cd509c51c95d3643e4eb018b2754efa70c0"} Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.389343 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.412271 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.416332 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" podStartSLOduration=70.416312122 
podStartE2EDuration="1m10.416312122s" podCreationTimestamp="2026-02-14 10:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:46:29.41401136 +0000 UTC m=+299.782638818" watchObservedRunningTime="2026-02-14 10:46:29.416312122 +0000 UTC m=+299.784939490" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.689877 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.722972 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-676447d85b-s8bzv" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.863670 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.892904 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 14 10:46:29 crc kubenswrapper[4736]: I0214 10:46:29.956510 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.072140 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.082007 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.134342 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.159455 4736 cert_rotation.go:91] certificate rotation detected, 
shutting down client connections to start using new credentials Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.277020 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.412479 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.457986 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.590760 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.598320 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.655613 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.868123 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 14 10:46:30 crc kubenswrapper[4736]: I0214 10:46:30.887462 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 10:46:31 crc kubenswrapper[4736]: I0214 10:46:31.511553 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 14 10:46:31 crc kubenswrapper[4736]: I0214 10:46:31.615621 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.345224 4736 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.345562 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.396648 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.396807 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.396735 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.396875 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.396927 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.396952 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.396972 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.397015 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.397095 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.397274 4736 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.397289 4736 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.397300 4736 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.397311 4736 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.397915 4736 scope.go:117] "RemoveContainer" containerID="763ae56c0d51a9336ac5ac4901380e14e8983f7111307fe9ada3e77311c7ec75"
Feb 14 10:46:33 crc kubenswrapper[4736]: E0214 10:46:33.398370 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.405108 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.423874 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.423942 4736 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40" exitCode=137
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.423989 4736 scope.go:117] "RemoveContainer" containerID="9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40"
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.424114 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.453669 4736 scope.go:117] "RemoveContainer" containerID="9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40"
Feb 14 10:46:33 crc kubenswrapper[4736]: E0214 10:46:33.454849 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40\": container with ID starting with 9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40 not found: ID does not exist" containerID="9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40"
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.454889 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40"} err="failed to get container status \"9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40\": rpc error: code = NotFound desc = could not find container \"9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40\": container with ID starting with 9f2c23ac1f2e4b761867f90ce82b50b522a6b65639500723698e1d1125c4cc40 not found: ID does not exist"
Feb 14 10:46:33 crc kubenswrapper[4736]: I0214 10:46:33.498374 4736 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:34 crc kubenswrapper[4736]: I0214 10:46:34.403522 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 14 10:46:34 crc kubenswrapper[4736]: I0214 10:46:34.403843 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 14 10:46:34 crc kubenswrapper[4736]: I0214 10:46:34.413333 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 14 10:46:34 crc kubenswrapper[4736]: I0214 10:46:34.413374 4736 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5a66cf93-c071-4ec6-863e-744a5a164bdc"
Feb 14 10:46:34 crc kubenswrapper[4736]: I0214 10:46:34.418180 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 14 10:46:34 crc kubenswrapper[4736]: I0214 10:46:34.418211 4736 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5a66cf93-c071-4ec6-863e-744a5a164bdc"
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.331678 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56856844c4-9gkzt"]
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.332008 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" containerName="controller-manager" containerID="cri-o://2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026" gracePeriod=30
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.423270 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6"]
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.423503 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" containerName="route-controller-manager" containerID="cri-o://0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd" gracePeriod=30
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.734298 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt"
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.819670 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6"
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.826989 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2xp5\" (UniqueName: \"kubernetes.io/projected/afe465cf-1a68-44cd-9f65-ffabb7ab311e-kube-api-access-q2xp5\") pod \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") "
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.827030 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-serving-cert\") pod \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") "
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.827101 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-client-ca\") pod \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") "
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.827129 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-proxy-ca-bundles\") pod \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") "
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.827163 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmwqn\" (UniqueName: \"kubernetes.io/projected/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-kube-api-access-zmwqn\") pod \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") "
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.827197 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe465cf-1a68-44cd-9f65-ffabb7ab311e-serving-cert\") pod \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") "
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.827248 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-client-ca\") pod \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") "
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.827283 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-config\") pod \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\" (UID: \"afe465cf-1a68-44cd-9f65-ffabb7ab311e\") "
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.827310 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-config\") pod \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\" (UID: \"27f2aaf5-5d63-4892-9e32-0537daa7cf1f\") "
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.828370 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-config" (OuterVolumeSpecName: "config") pod "27f2aaf5-5d63-4892-9e32-0537daa7cf1f" (UID: "27f2aaf5-5d63-4892-9e32-0537daa7cf1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.829778 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-client-ca" (OuterVolumeSpecName: "client-ca") pod "27f2aaf5-5d63-4892-9e32-0537daa7cf1f" (UID: "27f2aaf5-5d63-4892-9e32-0537daa7cf1f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.830258 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "27f2aaf5-5d63-4892-9e32-0537daa7cf1f" (UID: "27f2aaf5-5d63-4892-9e32-0537daa7cf1f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.830907 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-client-ca" (OuterVolumeSpecName: "client-ca") pod "afe465cf-1a68-44cd-9f65-ffabb7ab311e" (UID: "afe465cf-1a68-44cd-9f65-ffabb7ab311e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.831441 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-config" (OuterVolumeSpecName: "config") pod "afe465cf-1a68-44cd-9f65-ffabb7ab311e" (UID: "afe465cf-1a68-44cd-9f65-ffabb7ab311e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.833319 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe465cf-1a68-44cd-9f65-ffabb7ab311e-kube-api-access-q2xp5" (OuterVolumeSpecName: "kube-api-access-q2xp5") pod "afe465cf-1a68-44cd-9f65-ffabb7ab311e" (UID: "afe465cf-1a68-44cd-9f65-ffabb7ab311e"). InnerVolumeSpecName "kube-api-access-q2xp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.833888 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-kube-api-access-zmwqn" (OuterVolumeSpecName: "kube-api-access-zmwqn") pod "27f2aaf5-5d63-4892-9e32-0537daa7cf1f" (UID: "27f2aaf5-5d63-4892-9e32-0537daa7cf1f"). InnerVolumeSpecName "kube-api-access-zmwqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.834840 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe465cf-1a68-44cd-9f65-ffabb7ab311e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "afe465cf-1a68-44cd-9f65-ffabb7ab311e" (UID: "afe465cf-1a68-44cd-9f65-ffabb7ab311e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.843693 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "27f2aaf5-5d63-4892-9e32-0537daa7cf1f" (UID: "27f2aaf5-5d63-4892-9e32-0537daa7cf1f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.928553 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmwqn\" (UniqueName: \"kubernetes.io/projected/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-kube-api-access-zmwqn\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.928592 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe465cf-1a68-44cd-9f65-ffabb7ab311e-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.928610 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.928625 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe465cf-1a68-44cd-9f65-ffabb7ab311e-config\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.928639 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-config\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.928654 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2xp5\" (UniqueName: \"kubernetes.io/projected/afe465cf-1a68-44cd-9f65-ffabb7ab311e-kube-api-access-q2xp5\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.928668 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.928682 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:35 crc kubenswrapper[4736]: I0214 10:46:35.928696 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27f2aaf5-5d63-4892-9e32-0537daa7cf1f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.441240 4736 generic.go:334] "Generic (PLEG): container finished" podID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" containerID="0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd" exitCode=0
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.441313 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" event={"ID":"afe465cf-1a68-44cd-9f65-ffabb7ab311e","Type":"ContainerDied","Data":"0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd"}
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.441340 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6" event={"ID":"afe465cf-1a68-44cd-9f65-ffabb7ab311e","Type":"ContainerDied","Data":"c122672d24f3b7cb62678a9d0272be6b94e999fc443a98769c0d0aec58b29b2a"}
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.441339 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.441355 4736 scope.go:117] "RemoveContainer" containerID="0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.444012 4736 generic.go:334] "Generic (PLEG): container finished" podID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" containerID="2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026" exitCode=0
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.444064 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" event={"ID":"27f2aaf5-5d63-4892-9e32-0537daa7cf1f","Type":"ContainerDied","Data":"2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026"}
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.444109 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt" event={"ID":"27f2aaf5-5d63-4892-9e32-0537daa7cf1f","Type":"ContainerDied","Data":"a3fd5ca7f14eb77f48f667cecb3615caa863a86695b602c96776dfe4b933e42c"}
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.444215 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56856844c4-9gkzt"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.490570 4736 scope.go:117] "RemoveContainer" containerID="0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd"
Feb 14 10:46:36 crc kubenswrapper[4736]: E0214 10:46:36.491017 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd\": container with ID starting with 0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd not found: ID does not exist" containerID="0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.491244 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd"} err="failed to get container status \"0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd\": rpc error: code = NotFound desc = could not find container \"0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd\": container with ID starting with 0eee8b06ce3f922d35b4d73230eef41662e896eb21502464a8734e3dd572c1bd not found: ID does not exist"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.491287 4736 scope.go:117] "RemoveContainer" containerID="2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.502998 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6"]
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.503046 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fd8c5b8b-kftv6"]
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.517867 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56856844c4-9gkzt"]
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.521252 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56856844c4-9gkzt"]
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.522125 4736 scope.go:117] "RemoveContainer" containerID="2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026"
Feb 14 10:46:36 crc kubenswrapper[4736]: E0214 10:46:36.522613 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026\": container with ID starting with 2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026 not found: ID does not exist" containerID="2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.522647 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026"} err="failed to get container status \"2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026\": rpc error: code = NotFound desc = could not find container \"2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026\": container with ID starting with 2e92552d7dc000bc73f6bc6363f8e5a4c8be43eff427d5401647e6c6abece026 not found: ID does not exist"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.872273 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"]
Feb 14 10:46:36 crc kubenswrapper[4736]: E0214 10:46:36.872521 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" containerName="route-controller-manager"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.872545 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" containerName="route-controller-manager"
Feb 14 10:46:36 crc kubenswrapper[4736]: E0214 10:46:36.872565 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.872578 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 10:46:36 crc kubenswrapper[4736]: E0214 10:46:36.872605 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" containerName="controller-manager"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.872614 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" containerName="controller-manager"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.872801 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" containerName="controller-manager"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.872828 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.872853 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" containerName="route-controller-manager"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.873438 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.880838 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"]
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.882506 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.883083 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.883471 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.883788 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.884184 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.885991 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.887255 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.887404 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.888271 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.888482 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.888655 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.888966 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.892058 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.901456 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"]
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.904333 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 14 10:46:36 crc kubenswrapper[4736]: I0214 10:46:36.907042 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"]
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.047966 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-proxy-ca-bundles\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.048029 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-serving-cert\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.048068 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-client-ca\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.048601 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-client-ca\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.048688 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxglm\" (UniqueName: \"kubernetes.io/projected/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-kube-api-access-gxglm\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.048783 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86953a53-d4d3-41ca-a7cb-6bfe79da6854-serving-cert\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.048818 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-config\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.048898 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqb8\" (UniqueName: \"kubernetes.io/projected/86953a53-d4d3-41ca-a7cb-6bfe79da6854-kube-api-access-zkqb8\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.048973 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-config\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.149492 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxglm\" (UniqueName: \"kubernetes.io/projected/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-kube-api-access-gxglm\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.149565 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86953a53-d4d3-41ca-a7cb-6bfe79da6854-serving-cert\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.149605 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-config\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.149634 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqb8\" (UniqueName: \"kubernetes.io/projected/86953a53-d4d3-41ca-a7cb-6bfe79da6854-kube-api-access-zkqb8\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.149685 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-config\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.149725 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-proxy-ca-bundles\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.149778 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-serving-cert\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.149837 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-client-ca\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.149872 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-client-ca\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.151194 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-client-ca\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.152462 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-config\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.152462 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-client-ca\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.152740 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-proxy-ca-bundles\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.153482 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-config\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.155442 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86953a53-d4d3-41ca-a7cb-6bfe79da6854-serving-cert\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"
Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.158365 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-serving-cert\") pod
\"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.174103 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqb8\" (UniqueName: \"kubernetes.io/projected/86953a53-d4d3-41ca-a7cb-6bfe79da6854-kube-api-access-zkqb8\") pod \"route-controller-manager-69f5dd6f66-nhgw8\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.176298 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxglm\" (UniqueName: \"kubernetes.io/projected/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-kube-api-access-gxglm\") pod \"controller-manager-ccddb95cd-gbr2j\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.208541 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.221895 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.685139 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"] Feb 14 10:46:37 crc kubenswrapper[4736]: I0214 10:46:37.746735 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"] Feb 14 10:46:37 crc kubenswrapper[4736]: W0214 10:46:37.758661 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2db0982_4710_4a9f_bc17_5752b8fc8f5c.slice/crio-bdca443bec17f1283a24333a6aefad10ce0fe1f1a9566d74ee3e99c56de1922b WatchSource:0}: Error finding container bdca443bec17f1283a24333a6aefad10ce0fe1f1a9566d74ee3e99c56de1922b: Status 404 returned error can't find the container with id bdca443bec17f1283a24333a6aefad10ce0fe1f1a9566d74ee3e99c56de1922b Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.409962 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f2aaf5-5d63-4892-9e32-0537daa7cf1f" path="/var/lib/kubelet/pods/27f2aaf5-5d63-4892-9e32-0537daa7cf1f/volumes" Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.410733 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe465cf-1a68-44cd-9f65-ffabb7ab311e" path="/var/lib/kubelet/pods/afe465cf-1a68-44cd-9f65-ffabb7ab311e/volumes" Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.458769 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" event={"ID":"86953a53-d4d3-41ca-a7cb-6bfe79da6854","Type":"ContainerStarted","Data":"cefe502ad6de0d65c57bc28f6968f1c9849d3db2d26b6c00997b4f8465bdb568"} Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.458816 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" event={"ID":"86953a53-d4d3-41ca-a7cb-6bfe79da6854","Type":"ContainerStarted","Data":"7b2e118552ca4cfbb17fe9ea037e91216a626c4ccac1db8b047f49b4b7eba25e"} Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.459886 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.461082 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" event={"ID":"a2db0982-4710-4a9f-bc17-5752b8fc8f5c","Type":"ContainerStarted","Data":"056c79e6d09a08cfde05110fba6b447983a66063e72b2985197eadf2bdf28e56"} Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.461106 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" event={"ID":"a2db0982-4710-4a9f-bc17-5752b8fc8f5c","Type":"ContainerStarted","Data":"bdca443bec17f1283a24333a6aefad10ce0fe1f1a9566d74ee3e99c56de1922b"} Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.461391 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.465005 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.467348 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.478521 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" 
podStartSLOduration=3.478506888 podStartE2EDuration="3.478506888s" podCreationTimestamp="2026-02-14 10:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:46:38.476201876 +0000 UTC m=+308.844829254" watchObservedRunningTime="2026-02-14 10:46:38.478506888 +0000 UTC m=+308.847134256" Feb 14 10:46:38 crc kubenswrapper[4736]: I0214 10:46:38.496090 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" podStartSLOduration=3.496073708 podStartE2EDuration="3.496073708s" podCreationTimestamp="2026-02-14 10:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:46:38.495337975 +0000 UTC m=+308.863965353" watchObservedRunningTime="2026-02-14 10:46:38.496073708 +0000 UTC m=+308.864701076" Feb 14 10:46:41 crc kubenswrapper[4736]: I0214 10:46:41.469325 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 10:46:45 crc kubenswrapper[4736]: I0214 10:46:45.396865 4736 scope.go:117] "RemoveContainer" containerID="763ae56c0d51a9336ac5ac4901380e14e8983f7111307fe9ada3e77311c7ec75" Feb 14 10:46:46 crc kubenswrapper[4736]: I0214 10:46:46.505080 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Feb 14 10:46:46 crc kubenswrapper[4736]: I0214 10:46:46.505787 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"dea682d1143a59431c05d91080c975205c5b83320fa65179310e5e4b89d16fe7"} Feb 14 10:46:49 crc kubenswrapper[4736]: 
I0214 10:46:49.520912 4736 generic.go:334] "Generic (PLEG): container finished" podID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerID="93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e" exitCode=0 Feb 14 10:46:49 crc kubenswrapper[4736]: I0214 10:46:49.521101 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" event={"ID":"d8991afa-da38-4dd2-9f58-cf895ec92784","Type":"ContainerDied","Data":"93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e"} Feb 14 10:46:49 crc kubenswrapper[4736]: I0214 10:46:49.521509 4736 scope.go:117] "RemoveContainer" containerID="93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e" Feb 14 10:46:50 crc kubenswrapper[4736]: I0214 10:46:50.532999 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" event={"ID":"d8991afa-da38-4dd2-9f58-cf895ec92784","Type":"ContainerStarted","Data":"07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995"} Feb 14 10:46:50 crc kubenswrapper[4736]: I0214 10:46:50.534461 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:46:50 crc kubenswrapper[4736]: I0214 10:46:50.536480 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.313201 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"] Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.313610 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" podUID="a2db0982-4710-4a9f-bc17-5752b8fc8f5c" containerName="controller-manager" 
containerID="cri-o://056c79e6d09a08cfde05110fba6b447983a66063e72b2985197eadf2bdf28e56" gracePeriod=30 Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.354586 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"] Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.355016 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" podUID="86953a53-d4d3-41ca-a7cb-6bfe79da6854" containerName="route-controller-manager" containerID="cri-o://cefe502ad6de0d65c57bc28f6968f1c9849d3db2d26b6c00997b4f8465bdb568" gracePeriod=30 Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.560294 4736 generic.go:334] "Generic (PLEG): container finished" podID="a2db0982-4710-4a9f-bc17-5752b8fc8f5c" containerID="056c79e6d09a08cfde05110fba6b447983a66063e72b2985197eadf2bdf28e56" exitCode=0 Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.560385 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" event={"ID":"a2db0982-4710-4a9f-bc17-5752b8fc8f5c","Type":"ContainerDied","Data":"056c79e6d09a08cfde05110fba6b447983a66063e72b2985197eadf2bdf28e56"} Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.562877 4736 generic.go:334] "Generic (PLEG): container finished" podID="86953a53-d4d3-41ca-a7cb-6bfe79da6854" containerID="cefe502ad6de0d65c57bc28f6968f1c9849d3db2d26b6c00997b4f8465bdb568" exitCode=0 Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.562917 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" event={"ID":"86953a53-d4d3-41ca-a7cb-6bfe79da6854","Type":"ContainerDied","Data":"cefe502ad6de0d65c57bc28f6968f1c9849d3db2d26b6c00997b4f8465bdb568"} Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.891876 4736 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" Feb 14 10:46:55 crc kubenswrapper[4736]: I0214 10:46:55.895597 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.006459 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-client-ca\") pod \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.006522 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-config\") pod \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.006560 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-proxy-ca-bundles\") pod \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.006589 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86953a53-d4d3-41ca-a7cb-6bfe79da6854-serving-cert\") pod \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.006642 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkqb8\" (UniqueName: 
\"kubernetes.io/projected/86953a53-d4d3-41ca-a7cb-6bfe79da6854-kube-api-access-zkqb8\") pod \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.006682 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-client-ca\") pod \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.006757 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-serving-cert\") pod \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.007243 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-client-ca" (OuterVolumeSpecName: "client-ca") pod "a2db0982-4710-4a9f-bc17-5752b8fc8f5c" (UID: "a2db0982-4710-4a9f-bc17-5752b8fc8f5c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.007291 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a2db0982-4710-4a9f-bc17-5752b8fc8f5c" (UID: "a2db0982-4710-4a9f-bc17-5752b8fc8f5c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.007323 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-config" (OuterVolumeSpecName: "config") pod "a2db0982-4710-4a9f-bc17-5752b8fc8f5c" (UID: "a2db0982-4710-4a9f-bc17-5752b8fc8f5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.007763 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxglm\" (UniqueName: \"kubernetes.io/projected/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-kube-api-access-gxglm\") pod \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\" (UID: \"a2db0982-4710-4a9f-bc17-5752b8fc8f5c\") " Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.007799 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-config\") pod \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\" (UID: \"86953a53-d4d3-41ca-a7cb-6bfe79da6854\") " Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.008053 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.008068 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.008079 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:46:56 crc 
kubenswrapper[4736]: I0214 10:46:56.008179 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-client-ca" (OuterVolumeSpecName: "client-ca") pod "86953a53-d4d3-41ca-a7cb-6bfe79da6854" (UID: "86953a53-d4d3-41ca-a7cb-6bfe79da6854"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.008423 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-config" (OuterVolumeSpecName: "config") pod "86953a53-d4d3-41ca-a7cb-6bfe79da6854" (UID: "86953a53-d4d3-41ca-a7cb-6bfe79da6854"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.012315 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86953a53-d4d3-41ca-a7cb-6bfe79da6854-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "86953a53-d4d3-41ca-a7cb-6bfe79da6854" (UID: "86953a53-d4d3-41ca-a7cb-6bfe79da6854"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.012577 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86953a53-d4d3-41ca-a7cb-6bfe79da6854-kube-api-access-zkqb8" (OuterVolumeSpecName: "kube-api-access-zkqb8") pod "86953a53-d4d3-41ca-a7cb-6bfe79da6854" (UID: "86953a53-d4d3-41ca-a7cb-6bfe79da6854"). InnerVolumeSpecName "kube-api-access-zkqb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.013228 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a2db0982-4710-4a9f-bc17-5752b8fc8f5c" (UID: "a2db0982-4710-4a9f-bc17-5752b8fc8f5c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.017391 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-kube-api-access-gxglm" (OuterVolumeSpecName: "kube-api-access-gxglm") pod "a2db0982-4710-4a9f-bc17-5752b8fc8f5c" (UID: "a2db0982-4710-4a9f-bc17-5752b8fc8f5c"). InnerVolumeSpecName "kube-api-access-gxglm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.109499 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.109801 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxglm\" (UniqueName: \"kubernetes.io/projected/a2db0982-4710-4a9f-bc17-5752b8fc8f5c-kube-api-access-gxglm\") on node \"crc\" DevicePath \"\"" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.109819 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.109834 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86953a53-d4d3-41ca-a7cb-6bfe79da6854-client-ca\") on node \"crc\" DevicePath 
\"\"" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.109846 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86953a53-d4d3-41ca-a7cb-6bfe79da6854-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.109859 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkqb8\" (UniqueName: \"kubernetes.io/projected/86953a53-d4d3-41ca-a7cb-6bfe79da6854-kube-api-access-zkqb8\") on node \"crc\" DevicePath \"\"" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.571399 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" event={"ID":"86953a53-d4d3-41ca-a7cb-6bfe79da6854","Type":"ContainerDied","Data":"7b2e118552ca4cfbb17fe9ea037e91216a626c4ccac1db8b047f49b4b7eba25e"} Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.571457 4736 scope.go:117] "RemoveContainer" containerID="cefe502ad6de0d65c57bc28f6968f1c9849d3db2d26b6c00997b4f8465bdb568" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.573460 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" event={"ID":"a2db0982-4710-4a9f-bc17-5752b8fc8f5c","Type":"ContainerDied","Data":"bdca443bec17f1283a24333a6aefad10ce0fe1f1a9566d74ee3e99c56de1922b"} Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.573544 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccddb95cd-gbr2j" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.575036 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.597409 4736 scope.go:117] "RemoveContainer" containerID="056c79e6d09a08cfde05110fba6b447983a66063e72b2985197eadf2bdf28e56" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.602190 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"] Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.614991 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-ccddb95cd-gbr2j"] Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.620718 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"] Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.625279 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f5dd6f66-nhgw8"] Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.887230 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-k46p9"] Feb 14 10:46:56 crc kubenswrapper[4736]: E0214 10:46:56.887537 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86953a53-d4d3-41ca-a7cb-6bfe79da6854" containerName="route-controller-manager" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.887557 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="86953a53-d4d3-41ca-a7cb-6bfe79da6854" containerName="route-controller-manager" Feb 14 10:46:56 crc kubenswrapper[4736]: E0214 10:46:56.887580 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2db0982-4710-4a9f-bc17-5752b8fc8f5c" containerName="controller-manager" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.887590 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a2db0982-4710-4a9f-bc17-5752b8fc8f5c" containerName="controller-manager" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.887727 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="86953a53-d4d3-41ca-a7cb-6bfe79da6854" containerName="route-controller-manager" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.887769 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2db0982-4710-4a9f-bc17-5752b8fc8f5c" containerName="controller-manager" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.888274 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.891543 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.891718 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.892009 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.892561 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.892828 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.899583 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.900080 4736 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.901239 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz"] Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.902519 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.903646 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.906330 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-k46p9"] Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.913197 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.913423 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.913461 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.913533 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.913653 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 10:46:56 crc kubenswrapper[4736]: I0214 10:46:56.921936 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz"] Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.025473 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-client-ca\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.025724 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcqnv\" (UniqueName: \"kubernetes.io/projected/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-kube-api-access-zcqnv\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.025938 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.026064 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-client-ca\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.026197 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-config\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.026307 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-config\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.026418 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-serving-cert\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.026544 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-serving-cert\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.026652 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlp8\" (UniqueName: \"kubernetes.io/projected/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-kube-api-access-9nlp8\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: 
\"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.127356 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-serving-cert\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.127415 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlp8\" (UniqueName: \"kubernetes.io/projected/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-kube-api-access-9nlp8\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.127450 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-client-ca\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.127474 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqnv\" (UniqueName: \"kubernetes.io/projected/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-kube-api-access-zcqnv\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.127532 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.127555 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-client-ca\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.127592 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-config\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.127619 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-config\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.127640 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-serving-cert\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 
10:46:57.130253 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.130504 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-client-ca\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.130868 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-client-ca\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.131698 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-config\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.132035 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-config\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " 
pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.134847 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-serving-cert\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.136457 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-serving-cert\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.151635 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqnv\" (UniqueName: \"kubernetes.io/projected/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-kube-api-access-zcqnv\") pod \"controller-manager-d48c458cb-k46p9\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.169792 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlp8\" (UniqueName: \"kubernetes.io/projected/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-kube-api-access-9nlp8\") pod \"route-controller-manager-d5b76bdb6-dmztz\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.214380 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.241900 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.731994 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-k46p9"] Feb 14 10:46:57 crc kubenswrapper[4736]: W0214 10:46:57.734258 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41ec618_b033_4bb3_a7ac_e7ce322c1aa6.slice/crio-f3f30a78a565bbce6d73e896619ad8208489f3c5ffac4dd765eeeca7815edd0f WatchSource:0}: Error finding container f3f30a78a565bbce6d73e896619ad8208489f3c5ffac4dd765eeeca7815edd0f: Status 404 returned error can't find the container with id f3f30a78a565bbce6d73e896619ad8208489f3c5ffac4dd765eeeca7815edd0f Feb 14 10:46:57 crc kubenswrapper[4736]: I0214 10:46:57.790556 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz"] Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.404709 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86953a53-d4d3-41ca-a7cb-6bfe79da6854" path="/var/lib/kubelet/pods/86953a53-d4d3-41ca-a7cb-6bfe79da6854/volumes" Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.405488 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2db0982-4710-4a9f-bc17-5752b8fc8f5c" path="/var/lib/kubelet/pods/a2db0982-4710-4a9f-bc17-5752b8fc8f5c/volumes" Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.587386 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" 
event={"ID":"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6","Type":"ContainerStarted","Data":"28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a"} Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.587434 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" event={"ID":"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6","Type":"ContainerStarted","Data":"f3f30a78a565bbce6d73e896619ad8208489f3c5ffac4dd765eeeca7815edd0f"} Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.587716 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.590196 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" event={"ID":"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac","Type":"ContainerStarted","Data":"bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022"} Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.590241 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" event={"ID":"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac","Type":"ContainerStarted","Data":"79328277bd275176ab8d0f24dc18d6dfdcf8c17063a68acdfd11527bb11c197f"} Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.590434 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.595049 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.597157 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.604417 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" podStartSLOduration=3.6044044729999998 podStartE2EDuration="3.604404473s" podCreationTimestamp="2026-02-14 10:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:46:58.601884894 +0000 UTC m=+328.970512292" watchObservedRunningTime="2026-02-14 10:46:58.604404473 +0000 UTC m=+328.973031841" Feb 14 10:46:58 crc kubenswrapper[4736]: I0214 10:46:58.647878 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" podStartSLOduration=3.647863642 podStartE2EDuration="3.647863642s" podCreationTimestamp="2026-02-14 10:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:46:58.64364211 +0000 UTC m=+329.012269488" watchObservedRunningTime="2026-02-14 10:46:58.647863642 +0000 UTC m=+329.016491010" Feb 14 10:47:09 crc kubenswrapper[4736]: I0214 10:47:09.675235 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.184936 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9hq7"] Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.186049 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n9hq7" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerName="registry-server" 
containerID="cri-o://499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272" gracePeriod=2 Feb 14 10:47:22 crc kubenswrapper[4736]: E0214 10:47:22.530515 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272 is running failed: container process not found" containerID="499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 10:47:22 crc kubenswrapper[4736]: E0214 10:47:22.530906 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272 is running failed: container process not found" containerID="499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 10:47:22 crc kubenswrapper[4736]: E0214 10:47:22.531136 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272 is running failed: container process not found" containerID="499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 10:47:22 crc kubenswrapper[4736]: E0214 10:47:22.531217 4736 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-n9hq7" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerName="registry-server" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.698634 4736 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.740114 4736 generic.go:334] "Generic (PLEG): container finished" podID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerID="499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272" exitCode=0 Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.740207 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9hq7" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.740219 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hq7" event={"ID":"2ea41fdf-923c-4ec9-b482-a53e54045056","Type":"ContainerDied","Data":"499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272"} Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.740557 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hq7" event={"ID":"2ea41fdf-923c-4ec9-b482-a53e54045056","Type":"ContainerDied","Data":"3f57b9a7f0bf72b3457e8476a1210d2e4b3e620d938bf4edbf3f6a4a4954293d"} Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.740576 4736 scope.go:117] "RemoveContainer" containerID="499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.756704 4736 scope.go:117] "RemoveContainer" containerID="7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.773269 4736 scope.go:117] "RemoveContainer" containerID="61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.785082 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-utilities\") pod 
\"2ea41fdf-923c-4ec9-b482-a53e54045056\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.785307 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-catalog-content\") pod \"2ea41fdf-923c-4ec9-b482-a53e54045056\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.785510 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdh5m\" (UniqueName: \"kubernetes.io/projected/2ea41fdf-923c-4ec9-b482-a53e54045056-kube-api-access-pdh5m\") pod \"2ea41fdf-923c-4ec9-b482-a53e54045056\" (UID: \"2ea41fdf-923c-4ec9-b482-a53e54045056\") " Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.788148 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-utilities" (OuterVolumeSpecName: "utilities") pod "2ea41fdf-923c-4ec9-b482-a53e54045056" (UID: "2ea41fdf-923c-4ec9-b482-a53e54045056"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.789021 4736 scope.go:117] "RemoveContainer" containerID="499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272" Feb 14 10:47:22 crc kubenswrapper[4736]: E0214 10:47:22.792180 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272\": container with ID starting with 499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272 not found: ID does not exist" containerID="499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.792223 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272"} err="failed to get container status \"499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272\": rpc error: code = NotFound desc = could not find container \"499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272\": container with ID starting with 499aa8bd8d0bac70f3e29080466829c4f6254034f04558b897251ad053776272 not found: ID does not exist" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.792253 4736 scope.go:117] "RemoveContainer" containerID="7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61" Feb 14 10:47:22 crc kubenswrapper[4736]: E0214 10:47:22.792472 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61\": container with ID starting with 7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61 not found: ID does not exist" containerID="7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.792500 
4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61"} err="failed to get container status \"7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61\": rpc error: code = NotFound desc = could not find container \"7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61\": container with ID starting with 7bac4913a31617d0c9627592ed0479cff4fdadf7894fa70faf56df10979b0d61 not found: ID does not exist" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.792517 4736 scope.go:117] "RemoveContainer" containerID="61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920" Feb 14 10:47:22 crc kubenswrapper[4736]: E0214 10:47:22.792822 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920\": container with ID starting with 61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920 not found: ID does not exist" containerID="61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.792851 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920"} err="failed to get container status \"61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920\": rpc error: code = NotFound desc = could not find container \"61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920\": container with ID starting with 61a5ed8e923cf821aa67d22db806e8a3b1374a2c1a0d12e2d7d8aa98a21d0920 not found: ID does not exist" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.793084 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea41fdf-923c-4ec9-b482-a53e54045056-kube-api-access-pdh5m" 
(OuterVolumeSpecName: "kube-api-access-pdh5m") pod "2ea41fdf-923c-4ec9-b482-a53e54045056" (UID: "2ea41fdf-923c-4ec9-b482-a53e54045056"). InnerVolumeSpecName "kube-api-access-pdh5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.887291 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.887366 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdh5m\" (UniqueName: \"kubernetes.io/projected/2ea41fdf-923c-4ec9-b482-a53e54045056-kube-api-access-pdh5m\") on node \"crc\" DevicePath \"\"" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.909838 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ea41fdf-923c-4ec9-b482-a53e54045056" (UID: "2ea41fdf-923c-4ec9-b482-a53e54045056"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:47:22 crc kubenswrapper[4736]: I0214 10:47:22.989415 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea41fdf-923c-4ec9-b482-a53e54045056-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:47:23 crc kubenswrapper[4736]: I0214 10:47:23.066395 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9hq7"] Feb 14 10:47:23 crc kubenswrapper[4736]: I0214 10:47:23.071192 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n9hq7"] Feb 14 10:47:24 crc kubenswrapper[4736]: I0214 10:47:24.412634 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" path="/var/lib/kubelet/pods/2ea41fdf-923c-4ec9-b482-a53e54045056/volumes" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.344402 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-k46p9"] Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.345182 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" podUID="e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" containerName="controller-manager" containerID="cri-o://28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a" gracePeriod=30 Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.756295 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.821471 4736 generic.go:334] "Generic (PLEG): container finished" podID="e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" containerID="28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a" exitCode=0 Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.821572 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" event={"ID":"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6","Type":"ContainerDied","Data":"28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a"} Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.821616 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" event={"ID":"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6","Type":"ContainerDied","Data":"f3f30a78a565bbce6d73e896619ad8208489f3c5ffac4dd765eeeca7815edd0f"} Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.821647 4736 scope.go:117] "RemoveContainer" containerID="28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.821849 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-k46p9" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.844532 4736 scope.go:117] "RemoveContainer" containerID="28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a" Feb 14 10:47:35 crc kubenswrapper[4736]: E0214 10:47:35.845213 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a\": container with ID starting with 28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a not found: ID does not exist" containerID="28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.845251 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a"} err="failed to get container status \"28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a\": rpc error: code = NotFound desc = could not find container \"28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a\": container with ID starting with 28801505851b8998884db742cd9b27779ef55aeb725f9f6319139d79f8a0c34a not found: ID does not exist" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.855688 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-client-ca\") pod \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.855789 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-serving-cert\") pod \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\" (UID: 
\"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.855860 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-config\") pod \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.855917 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-proxy-ca-bundles\") pod \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.855944 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcqnv\" (UniqueName: \"kubernetes.io/projected/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-kube-api-access-zcqnv\") pod \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\" (UID: \"e41ec618-b033-4bb3-a7ac-e7ce322c1aa6\") " Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.856871 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-client-ca" (OuterVolumeSpecName: "client-ca") pod "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" (UID: "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.856923 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" (UID: "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.857201 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-config" (OuterVolumeSpecName: "config") pod "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" (UID: "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.861322 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-kube-api-access-zcqnv" (OuterVolumeSpecName: "kube-api-access-zcqnv") pod "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" (UID: "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6"). InnerVolumeSpecName "kube-api-access-zcqnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.861613 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" (UID: "e41ec618-b033-4bb3-a7ac-e7ce322c1aa6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.956891 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.959629 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.962269 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcqnv\" (UniqueName: \"kubernetes.io/projected/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-kube-api-access-zcqnv\") on node \"crc\" DevicePath \"\"" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.962395 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:47:35 crc kubenswrapper[4736]: I0214 10:47:35.962545 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.157376 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-k46p9"] Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.164143 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-k46p9"] Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.405367 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" 
path="/var/lib/kubelet/pods/e41ec618-b033-4bb3-a7ac-e7ce322c1aa6/volumes" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.916413 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-ccddb95cd-hnrgc"] Feb 14 10:47:36 crc kubenswrapper[4736]: E0214 10:47:36.916603 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerName="registry-server" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.916614 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerName="registry-server" Feb 14 10:47:36 crc kubenswrapper[4736]: E0214 10:47:36.916625 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" containerName="controller-manager" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.916630 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" containerName="controller-manager" Feb 14 10:47:36 crc kubenswrapper[4736]: E0214 10:47:36.916639 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerName="extract-content" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.916645 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerName="extract-content" Feb 14 10:47:36 crc kubenswrapper[4736]: E0214 10:47:36.916655 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerName="extract-utilities" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.916661 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerName="extract-utilities" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.916761 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2ea41fdf-923c-4ec9-b482-a53e54045056" containerName="registry-server" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.916779 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41ec618-b033-4bb3-a7ac-e7ce322c1aa6" containerName="controller-manager" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.917089 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.919344 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.919772 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.921029 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.921560 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.921735 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.921985 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.935462 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.937435 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-ccddb95cd-hnrgc"] Feb 14 10:47:36 crc 
kubenswrapper[4736]: I0214 10:47:36.973802 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsp9p\" (UniqueName: \"kubernetes.io/projected/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-kube-api-access-wsp9p\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.973867 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-client-ca\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.973896 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-serving-cert\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.973931 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-proxy-ca-bundles\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:36 crc kubenswrapper[4736]: I0214 10:47:36.973949 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-config\") 
pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.075437 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-proxy-ca-bundles\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.075507 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-config\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.075565 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsp9p\" (UniqueName: \"kubernetes.io/projected/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-kube-api-access-wsp9p\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.075613 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-client-ca\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.075646 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-serving-cert\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.076478 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-client-ca\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.076764 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-proxy-ca-bundles\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.076816 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-config\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.079341 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-serving-cert\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.090679 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-wsp9p\" (UniqueName: \"kubernetes.io/projected/0b18fe5e-fe5b-4f3e-aa13-04ded26b4348-kube-api-access-wsp9p\") pod \"controller-manager-ccddb95cd-hnrgc\" (UID: \"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348\") " pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.230477 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.732834 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-ccddb95cd-hnrgc"] Feb 14 10:47:37 crc kubenswrapper[4736]: I0214 10:47:37.834422 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" event={"ID":"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348","Type":"ContainerStarted","Data":"91f864c250f4fb36d0fcd65f1aa5b35aa3e52112412706a948b45d74b5b82c84"} Feb 14 10:47:38 crc kubenswrapper[4736]: I0214 10:47:38.840658 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" event={"ID":"0b18fe5e-fe5b-4f3e-aa13-04ded26b4348","Type":"ContainerStarted","Data":"8d4f447c1f1c49cfdc6a699b295541db2a78014b21c2956fee454b18fe31a3b6"} Feb 14 10:47:38 crc kubenswrapper[4736]: I0214 10:47:38.841834 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:38 crc kubenswrapper[4736]: I0214 10:47:38.846613 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" Feb 14 10:47:38 crc kubenswrapper[4736]: I0214 10:47:38.864221 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-ccddb95cd-hnrgc" 
podStartSLOduration=3.864195551 podStartE2EDuration="3.864195551s" podCreationTimestamp="2026-02-14 10:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:47:38.858515493 +0000 UTC m=+369.227142881" watchObservedRunningTime="2026-02-14 10:47:38.864195551 +0000 UTC m=+369.232822929" Feb 14 10:47:47 crc kubenswrapper[4736]: I0214 10:47:47.696220 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:47:47 crc kubenswrapper[4736]: I0214 10:47:47.696823 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:47:48 crc kubenswrapper[4736]: I0214 10:47:48.963601 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7jgth"] Feb 14 10:47:48 crc kubenswrapper[4736]: I0214 10:47:48.964809 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.003350 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7jgth"] Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.127750 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e659a62-bcdf-4af3-8e03-53d921e020df-bound-sa-token\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.127806 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e659a62-bcdf-4af3-8e03-53d921e020df-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.127834 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e659a62-bcdf-4af3-8e03-53d921e020df-trusted-ca\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.127854 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e659a62-bcdf-4af3-8e03-53d921e020df-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.127916 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e659a62-bcdf-4af3-8e03-53d921e020df-registry-tls\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.127959 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hkkp\" (UniqueName: \"kubernetes.io/projected/1e659a62-bcdf-4af3-8e03-53d921e020df-kube-api-access-8hkkp\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.128021 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.128114 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e659a62-bcdf-4af3-8e03-53d921e020df-registry-certificates\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.147404 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.229632 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e659a62-bcdf-4af3-8e03-53d921e020df-registry-tls\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.229692 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hkkp\" (UniqueName: \"kubernetes.io/projected/1e659a62-bcdf-4af3-8e03-53d921e020df-kube-api-access-8hkkp\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.229737 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e659a62-bcdf-4af3-8e03-53d921e020df-registry-certificates\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.229815 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e659a62-bcdf-4af3-8e03-53d921e020df-bound-sa-token\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc 
kubenswrapper[4736]: I0214 10:47:49.229847 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e659a62-bcdf-4af3-8e03-53d921e020df-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.229877 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e659a62-bcdf-4af3-8e03-53d921e020df-trusted-ca\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.229901 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e659a62-bcdf-4af3-8e03-53d921e020df-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.230405 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e659a62-bcdf-4af3-8e03-53d921e020df-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.231091 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e659a62-bcdf-4af3-8e03-53d921e020df-trusted-ca\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.231318 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e659a62-bcdf-4af3-8e03-53d921e020df-registry-certificates\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.236064 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e659a62-bcdf-4af3-8e03-53d921e020df-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.236433 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e659a62-bcdf-4af3-8e03-53d921e020df-registry-tls\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.251159 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hkkp\" (UniqueName: \"kubernetes.io/projected/1e659a62-bcdf-4af3-8e03-53d921e020df-kube-api-access-8hkkp\") pod \"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.252230 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e659a62-bcdf-4af3-8e03-53d921e020df-bound-sa-token\") pod 
\"image-registry-66df7c8f76-7jgth\" (UID: \"1e659a62-bcdf-4af3-8e03-53d921e020df\") " pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.287313 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.669837 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7jgth"] Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.895725 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" event={"ID":"1e659a62-bcdf-4af3-8e03-53d921e020df","Type":"ContainerStarted","Data":"8a3d0a96aeb165bca4b8c64cc31b3b347a72978e60c062fd8997c893566b29ea"} Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.896028 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.896045 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" event={"ID":"1e659a62-bcdf-4af3-8e03-53d921e020df","Type":"ContainerStarted","Data":"ed69c138144899b2194e2a9ce4d7b99c373ba641021bf1e13344078dd51a589b"} Feb 14 10:47:49 crc kubenswrapper[4736]: I0214 10:47:49.915069 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" podStartSLOduration=1.9150521299999999 podStartE2EDuration="1.91505213s" podCreationTimestamp="2026-02-14 10:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:47:49.912103167 +0000 UTC m=+380.280730545" watchObservedRunningTime="2026-02-14 10:47:49.91505213 +0000 UTC m=+380.283679508" Feb 14 
10:48:09 crc kubenswrapper[4736]: I0214 10:48:09.293151 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7jgth" Feb 14 10:48:09 crc kubenswrapper[4736]: I0214 10:48:09.366893 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fss8"] Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.105711 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpgts"] Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.106602 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hpgts" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerName="registry-server" containerID="cri-o://f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63" gracePeriod=30 Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.116283 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kxsl4"] Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.116573 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kxsl4" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerName="registry-server" containerID="cri-o://c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e" gracePeriod=30 Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.122613 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v52bz"] Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.122803 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerName="marketplace-operator" 
containerID="cri-o://07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995" gracePeriod=30 Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.135327 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqrw8"] Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.140055 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kqrw8" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerName="registry-server" containerID="cri-o://faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425" gracePeriod=30 Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.141592 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46g9b"] Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.141868 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-46g9b" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerName="registry-server" containerID="cri-o://bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a" gracePeriod=30 Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.157981 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v7kg4"] Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.158613 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.168624 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v7kg4"] Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.362517 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v7kg4\" (UID: \"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5\") " pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.371033 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v7kg4\" (UID: \"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5\") " pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.371115 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdnph\" (UniqueName: \"kubernetes.io/projected/9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5-kube-api-access-vdnph\") pod \"marketplace-operator-79b997595-v7kg4\" (UID: \"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5\") " pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.472821 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v7kg4\" (UID: 
\"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5\") " pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.472878 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v7kg4\" (UID: \"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5\") " pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.472954 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdnph\" (UniqueName: \"kubernetes.io/projected/9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5-kube-api-access-vdnph\") pod \"marketplace-operator-79b997595-v7kg4\" (UID: \"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5\") " pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.474213 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v7kg4\" (UID: \"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5\") " pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.486947 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v7kg4\" (UID: \"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5\") " pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.501333 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vdnph\" (UniqueName: \"kubernetes.io/projected/9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5-kube-api-access-vdnph\") pod \"marketplace-operator-79b997595-v7kg4\" (UID: \"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5\") " pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.558944 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxsl4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.577955 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-catalog-content\") pod \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.578013 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb97l\" (UniqueName: \"kubernetes.io/projected/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-kube-api-access-sb97l\") pod \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.578093 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-utilities\") pod \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\" (UID: \"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.581229 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-utilities" (OuterVolumeSpecName: "utilities") pod "3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" (UID: "3f2feb07-1c8a-4c17-81a1-24f60ac3f31f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.583447 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-kube-api-access-sb97l" (OuterVolumeSpecName: "kube-api-access-sb97l") pod "3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" (UID: "3f2feb07-1c8a-4c17-81a1-24f60ac3f31f"). InnerVolumeSpecName "kube-api-access-sb97l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.651290 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" (UID: "3f2feb07-1c8a-4c17-81a1-24f60ac3f31f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.679035 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.679066 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.679080 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb97l\" (UniqueName: \"kubernetes.io/projected/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f-kube-api-access-sb97l\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.752234 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.760901 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.777449 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.807314 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpgts" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.809148 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.880401 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rlgw\" (UniqueName: \"kubernetes.io/projected/d8991afa-da38-4dd2-9f58-cf895ec92784-kube-api-access-5rlgw\") pod \"d8991afa-da38-4dd2-9f58-cf895ec92784\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.880464 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-utilities\") pod \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.880517 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-operator-metrics\") pod \"d8991afa-da38-4dd2-9f58-cf895ec92784\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " Feb 14 10:48:11 crc 
kubenswrapper[4736]: I0214 10:48:11.880600 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-trusted-ca\") pod \"d8991afa-da38-4dd2-9f58-cf895ec92784\" (UID: \"d8991afa-da38-4dd2-9f58-cf895ec92784\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.880622 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wwwt\" (UniqueName: \"kubernetes.io/projected/b71b0996-cb92-4faa-9245-95f7e9afb7fb-kube-api-access-2wwwt\") pod \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.880670 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-catalog-content\") pod \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\" (UID: \"b71b0996-cb92-4faa-9245-95f7e9afb7fb\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.882866 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d8991afa-da38-4dd2-9f58-cf895ec92784" (UID: "d8991afa-da38-4dd2-9f58-cf895ec92784"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.882863 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-utilities" (OuterVolumeSpecName: "utilities") pod "b71b0996-cb92-4faa-9245-95f7e9afb7fb" (UID: "b71b0996-cb92-4faa-9245-95f7e9afb7fb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.885016 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71b0996-cb92-4faa-9245-95f7e9afb7fb-kube-api-access-2wwwt" (OuterVolumeSpecName: "kube-api-access-2wwwt") pod "b71b0996-cb92-4faa-9245-95f7e9afb7fb" (UID: "b71b0996-cb92-4faa-9245-95f7e9afb7fb"). InnerVolumeSpecName "kube-api-access-2wwwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.886598 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d8991afa-da38-4dd2-9f58-cf895ec92784" (UID: "d8991afa-da38-4dd2-9f58-cf895ec92784"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.887224 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8991afa-da38-4dd2-9f58-cf895ec92784-kube-api-access-5rlgw" (OuterVolumeSpecName: "kube-api-access-5rlgw") pod "d8991afa-da38-4dd2-9f58-cf895ec92784" (UID: "d8991afa-da38-4dd2-9f58-cf895ec92784"). InnerVolumeSpecName "kube-api-access-5rlgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.911379 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b71b0996-cb92-4faa-9245-95f7e9afb7fb" (UID: "b71b0996-cb92-4faa-9245-95f7e9afb7fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.981499 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-utilities\") pod \"d3d771cd-3ef9-44db-8981-3e8241e36f30\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.981549 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-catalog-content\") pod \"d3d771cd-3ef9-44db-8981-3e8241e36f30\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.981579 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-catalog-content\") pod \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.981597 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrg7\" (UniqueName: \"kubernetes.io/projected/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-kube-api-access-jbrg7\") pod \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.981627 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khqcw\" (UniqueName: \"kubernetes.io/projected/d3d771cd-3ef9-44db-8981-3e8241e36f30-kube-api-access-khqcw\") pod \"d3d771cd-3ef9-44db-8981-3e8241e36f30\" (UID: \"d3d771cd-3ef9-44db-8981-3e8241e36f30\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.981662 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-utilities\") pod \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\" (UID: \"36c96a86-aadc-46d0-bca7-3d9fcca42ec3\") " Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.982016 4736 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.982049 4736 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8991afa-da38-4dd2-9f58-cf895ec92784-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.982062 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wwwt\" (UniqueName: \"kubernetes.io/projected/b71b0996-cb92-4faa-9245-95f7e9afb7fb-kube-api-access-2wwwt\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.982072 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.982080 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rlgw\" (UniqueName: \"kubernetes.io/projected/d8991afa-da38-4dd2-9f58-cf895ec92784-kube-api-access-5rlgw\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.982089 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b0996-cb92-4faa-9245-95f7e9afb7fb-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.982509 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-utilities" (OuterVolumeSpecName: "utilities") pod "d3d771cd-3ef9-44db-8981-3e8241e36f30" (UID: "d3d771cd-3ef9-44db-8981-3e8241e36f30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.982779 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-utilities" (OuterVolumeSpecName: "utilities") pod "36c96a86-aadc-46d0-bca7-3d9fcca42ec3" (UID: "36c96a86-aadc-46d0-bca7-3d9fcca42ec3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.985251 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-kube-api-access-jbrg7" (OuterVolumeSpecName: "kube-api-access-jbrg7") pod "36c96a86-aadc-46d0-bca7-3d9fcca42ec3" (UID: "36c96a86-aadc-46d0-bca7-3d9fcca42ec3"). InnerVolumeSpecName "kube-api-access-jbrg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:48:11 crc kubenswrapper[4736]: I0214 10:48:11.985734 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d771cd-3ef9-44db-8981-3e8241e36f30-kube-api-access-khqcw" (OuterVolumeSpecName: "kube-api-access-khqcw") pod "d3d771cd-3ef9-44db-8981-3e8241e36f30" (UID: "d3d771cd-3ef9-44db-8981-3e8241e36f30"). InnerVolumeSpecName "kube-api-access-khqcw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.024907 4736 generic.go:334] "Generic (PLEG): container finished" podID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerID="bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a" exitCode=0 Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.024976 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46g9b" event={"ID":"d3d771cd-3ef9-44db-8981-3e8241e36f30","Type":"ContainerDied","Data":"bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.025007 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46g9b" event={"ID":"d3d771cd-3ef9-44db-8981-3e8241e36f30","Type":"ContainerDied","Data":"7dd29ef4205698f712b1cbe19be0ef0e9202394e63387ca1f455fe10fd0d9a1f"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.025026 4736 scope.go:117] "RemoveContainer" containerID="bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.025138 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46g9b" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.030344 4736 generic.go:334] "Generic (PLEG): container finished" podID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerID="faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425" exitCode=0 Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.030394 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqrw8" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.030412 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqrw8" event={"ID":"b71b0996-cb92-4faa-9245-95f7e9afb7fb","Type":"ContainerDied","Data":"faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.030439 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqrw8" event={"ID":"b71b0996-cb92-4faa-9245-95f7e9afb7fb","Type":"ContainerDied","Data":"ebbbeab1157075d0607cb662c9bfd1acce7b879efd9ee97de7185218cef1b4ec"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.034843 4736 generic.go:334] "Generic (PLEG): container finished" podID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerID="f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63" exitCode=0 Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.034903 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hpgts" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.034901 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpgts" event={"ID":"36c96a86-aadc-46d0-bca7-3d9fcca42ec3","Type":"ContainerDied","Data":"f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.034961 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpgts" event={"ID":"36c96a86-aadc-46d0-bca7-3d9fcca42ec3","Type":"ContainerDied","Data":"676f54631570ade4ddd54824621b70a3e5a06223a547e37a2cf83a4a3718dea9"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.037851 4736 generic.go:334] "Generic (PLEG): container finished" podID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerID="07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995" exitCode=0 Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.037897 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.037918 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" event={"ID":"d8991afa-da38-4dd2-9f58-cf895ec92784","Type":"ContainerDied","Data":"07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.037942 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v52bz" event={"ID":"d8991afa-da38-4dd2-9f58-cf895ec92784","Type":"ContainerDied","Data":"2ac9f8ec015628646fe4685b52327cd2c92ba3124c61d6b57eb8dda738ecc5cf"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.040268 4736 generic.go:334] "Generic (PLEG): container finished" podID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerID="c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e" exitCode=0 Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.040308 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxsl4" event={"ID":"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f","Type":"ContainerDied","Data":"c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.040333 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxsl4" event={"ID":"3f2feb07-1c8a-4c17-81a1-24f60ac3f31f","Type":"ContainerDied","Data":"618b2f5e0904935178f746e6d4294337b3c59cd4e4eb69e8472d5e34999d539a"} Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.040416 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kxsl4" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.045978 4736 scope.go:117] "RemoveContainer" containerID="fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.049642 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36c96a86-aadc-46d0-bca7-3d9fcca42ec3" (UID: "36c96a86-aadc-46d0-bca7-3d9fcca42ec3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.062819 4736 scope.go:117] "RemoveContainer" containerID="e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.085325 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.085788 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.085899 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrg7\" (UniqueName: \"kubernetes.io/projected/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-kube-api-access-jbrg7\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.086021 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khqcw\" (UniqueName: \"kubernetes.io/projected/d3d771cd-3ef9-44db-8981-3e8241e36f30-kube-api-access-khqcw\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:12 crc 
kubenswrapper[4736]: I0214 10:48:12.086116 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c96a86-aadc-46d0-bca7-3d9fcca42ec3-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.088564 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v52bz"] Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.093489 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v52bz"] Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.097823 4736 scope.go:117] "RemoveContainer" containerID="bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.098207 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqrw8"] Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.098272 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a\": container with ID starting with bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a not found: ID does not exist" containerID="bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.098334 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a"} err="failed to get container status \"bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a\": rpc error: code = NotFound desc = could not find container \"bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a\": container with ID starting with bd4be801b793447573a7505c4149a2c0b38c346fd3d14f7a738cd8da871c217a not found: ID 
does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.098370 4736 scope.go:117] "RemoveContainer" containerID="fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.098889 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f\": container with ID starting with fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f not found: ID does not exist" containerID="fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.098927 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f"} err="failed to get container status \"fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f\": rpc error: code = NotFound desc = could not find container \"fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f\": container with ID starting with fc5cd7cf724ca81be4867640426bddfc7cf24110ef911899bd6e005c9fd92f3f not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.098990 4736 scope.go:117] "RemoveContainer" containerID="e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.099362 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543\": container with ID starting with e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543 not found: ID does not exist" containerID="e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.099383 4736 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543"} err="failed to get container status \"e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543\": rpc error: code = NotFound desc = could not find container \"e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543\": container with ID starting with e83a385b9f0b47764f5916ad786af68a032ed0e54ddf26f12893a82a44bcc543 not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.099397 4736 scope.go:117] "RemoveContainer" containerID="faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.108039 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqrw8"] Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.115113 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kxsl4"] Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.116285 4736 scope.go:117] "RemoveContainer" containerID="7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.118738 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kxsl4"] Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.127626 4736 scope.go:117] "RemoveContainer" containerID="6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.138199 4736 scope.go:117] "RemoveContainer" containerID="faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.138604 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425\": container 
with ID starting with faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425 not found: ID does not exist" containerID="faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.138643 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425"} err="failed to get container status \"faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425\": rpc error: code = NotFound desc = could not find container \"faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425\": container with ID starting with faedb3ec3eeba85214be3fb9b3abe4ad3d75628cfd6916b94ce62a315a2f4425 not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.138669 4736 scope.go:117] "RemoveContainer" containerID="7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.138970 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534\": container with ID starting with 7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534 not found: ID does not exist" containerID="7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.139004 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534"} err="failed to get container status \"7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534\": rpc error: code = NotFound desc = could not find container \"7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534\": container with ID starting with 7c5ba48e08d372e300cbbe3c13aca23b17d675e0a2081025e2ade4aaa5af4534 not 
found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.139029 4736 scope.go:117] "RemoveContainer" containerID="6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.140185 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9\": container with ID starting with 6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9 not found: ID does not exist" containerID="6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.140208 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9"} err="failed to get container status \"6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9\": rpc error: code = NotFound desc = could not find container \"6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9\": container with ID starting with 6a732e089df3ebec2a8fcaa3bbef2ce6dae881fd20946e3171033e0689ce60e9 not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.140224 4736 scope.go:117] "RemoveContainer" containerID="f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.153575 4736 scope.go:117] "RemoveContainer" containerID="25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.168071 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3d771cd-3ef9-44db-8981-3e8241e36f30" (UID: "d3d771cd-3ef9-44db-8981-3e8241e36f30"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.175174 4736 scope.go:117] "RemoveContainer" containerID="a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.189336 4736 scope.go:117] "RemoveContainer" containerID="f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.190174 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d771cd-3ef9-44db-8981-3e8241e36f30-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.190546 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63\": container with ID starting with f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63 not found: ID does not exist" containerID="f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.190581 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63"} err="failed to get container status \"f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63\": rpc error: code = NotFound desc = could not find container \"f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63\": container with ID starting with f11b5c4de491b5eae5af73d0306bcb44ae881c4ae6835f43036ed139b4506f63 not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.190604 4736 scope.go:117] "RemoveContainer" containerID="25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742" Feb 14 10:48:12 crc 
kubenswrapper[4736]: E0214 10:48:12.190859 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742\": container with ID starting with 25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742 not found: ID does not exist" containerID="25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.190880 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742"} err="failed to get container status \"25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742\": rpc error: code = NotFound desc = could not find container \"25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742\": container with ID starting with 25c835f696c8d0489d150482d5a2554f7c9ca9a544553cc033e8f0ae0451a742 not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.190893 4736 scope.go:117] "RemoveContainer" containerID="a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.191111 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9\": container with ID starting with a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9 not found: ID does not exist" containerID="a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.191139 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9"} err="failed to get container status 
\"a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9\": rpc error: code = NotFound desc = could not find container \"a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9\": container with ID starting with a66e5c7649e7f0b4580e8b6622c167d758629e0afb0da7e8293d59e14b5e83b9 not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.191180 4736 scope.go:117] "RemoveContainer" containerID="07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.203249 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v7kg4"] Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.208899 4736 scope.go:117] "RemoveContainer" containerID="93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e" Feb 14 10:48:12 crc kubenswrapper[4736]: W0214 10:48:12.212218 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7c3a8f_a5ad_4668_9ddc_9a0f33a1eed5.slice/crio-24859b700e0547a6c5aaa6e6c09b74d756cb9c1020ce9d73a14432d03a591f47 WatchSource:0}: Error finding container 24859b700e0547a6c5aaa6e6c09b74d756cb9c1020ce9d73a14432d03a591f47: Status 404 returned error can't find the container with id 24859b700e0547a6c5aaa6e6c09b74d756cb9c1020ce9d73a14432d03a591f47 Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.222273 4736 scope.go:117] "RemoveContainer" containerID="07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.222722 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995\": container with ID starting with 07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995 not found: ID does not exist" 
containerID="07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.222779 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995"} err="failed to get container status \"07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995\": rpc error: code = NotFound desc = could not find container \"07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995\": container with ID starting with 07ed49cd92ed60f56a23024c2771f4cfccac8b52d9e6194f012b8bebb114d995 not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.222805 4736 scope.go:117] "RemoveContainer" containerID="93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.223143 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e\": container with ID starting with 93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e not found: ID does not exist" containerID="93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.223188 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e"} err="failed to get container status \"93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e\": rpc error: code = NotFound desc = could not find container \"93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e\": container with ID starting with 93becf87c47564c3edfbc5f50b828d2d328ee1650b500d816aac93735070bf5e not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.223211 4736 scope.go:117] 
"RemoveContainer" containerID="c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.253485 4736 scope.go:117] "RemoveContainer" containerID="ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.342096 4736 scope.go:117] "RemoveContainer" containerID="e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.373926 4736 scope.go:117] "RemoveContainer" containerID="c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.374352 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e\": container with ID starting with c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e not found: ID does not exist" containerID="c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.374397 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e"} err="failed to get container status \"c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e\": rpc error: code = NotFound desc = could not find container \"c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e\": container with ID starting with c707558089a94beff075b9224d3810e0807cb622e57495649d9e72ad19964c6e not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.374420 4736 scope.go:117] "RemoveContainer" containerID="ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.377092 4736 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9\": container with ID starting with ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9 not found: ID does not exist" containerID="ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.377138 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9"} err="failed to get container status \"ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9\": rpc error: code = NotFound desc = could not find container \"ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9\": container with ID starting with ceb8691e1a8b69f27399e208fea65e91206e9e57d835f78aa99cd6fced597dd9 not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.377167 4736 scope.go:117] "RemoveContainer" containerID="e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12" Feb 14 10:48:12 crc kubenswrapper[4736]: E0214 10:48:12.377734 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12\": container with ID starting with e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12 not found: ID does not exist" containerID="e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.377819 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12"} err="failed to get container status \"e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12\": rpc error: code = NotFound desc = could not find container 
\"e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12\": container with ID starting with e8e66d0ee18952463f0778eb08cac486d2cf4caec9713137b0fb63fbbf078c12 not found: ID does not exist" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.403129 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" path="/var/lib/kubelet/pods/3f2feb07-1c8a-4c17-81a1-24f60ac3f31f/volumes" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.403775 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" path="/var/lib/kubelet/pods/b71b0996-cb92-4faa-9245-95f7e9afb7fb/volumes" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.404482 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" path="/var/lib/kubelet/pods/d8991afa-da38-4dd2-9f58-cf895ec92784/volumes" Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.420811 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpgts"] Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.440646 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hpgts"] Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.441484 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46g9b"] Feb 14 10:48:12 crc kubenswrapper[4736]: I0214 10:48:12.444196 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-46g9b"] Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.049784 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" event={"ID":"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5","Type":"ContainerStarted","Data":"460caa18efe354df8b9e1b9d8b63d26363562fdc5cc7057255d86a8399bd7173"} Feb 14 10:48:13 crc 
kubenswrapper[4736]: I0214 10:48:13.049830 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" event={"ID":"9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5","Type":"ContainerStarted","Data":"24859b700e0547a6c5aaa6e6c09b74d756cb9c1020ce9d73a14432d03a591f47"} Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.049961 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.055721 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.066442 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-v7kg4" podStartSLOduration=2.066421882 podStartE2EDuration="2.066421882s" podCreationTimestamp="2026-02-14 10:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:48:13.063801829 +0000 UTC m=+403.432429217" watchObservedRunningTime="2026-02-14 10:48:13.066421882 +0000 UTC m=+403.435049260" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524640 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c5lgg"] Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524848 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerName="extract-content" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524860 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerName="extract-content" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524870 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524876 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524884 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerName="marketplace-operator" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524889 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerName="marketplace-operator" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524898 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerName="extract-utilities" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524903 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerName="extract-utilities" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524912 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerName="extract-utilities" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524918 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerName="extract-utilities" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524925 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524931 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524937 4736 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524942 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524950 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerName="extract-content" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524955 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerName="extract-content" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524963 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerName="extract-content" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524969 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerName="extract-content" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524977 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerName="marketplace-operator" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524982 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerName="marketplace-operator" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.524990 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerName="extract-content" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.524996 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerName="extract-content" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.525007 4736 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerName="extract-utilities" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.525012 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerName="extract-utilities" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.525018 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.525024 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: E0214 10:48:13.525032 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerName="extract-utilities" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.525038 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerName="extract-utilities" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.525119 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerName="marketplace-operator" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.525129 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2feb07-1c8a-4c17-81a1-24f60ac3f31f" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.525137 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.525147 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71b0996-cb92-4faa-9245-95f7e9afb7fb" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.525154 4736 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" containerName="registry-server" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.525296 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8991afa-da38-4dd2-9f58-cf895ec92784" containerName="marketplace-operator" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.526162 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.528960 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.537052 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c5lgg"] Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.615984 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6m46\" (UniqueName: \"kubernetes.io/projected/42f5df5b-7b21-45af-beb2-52f4bd141bb5-kube-api-access-h6m46\") pod \"redhat-marketplace-c5lgg\" (UID: \"42f5df5b-7b21-45af-beb2-52f4bd141bb5\") " pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.616061 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f5df5b-7b21-45af-beb2-52f4bd141bb5-utilities\") pod \"redhat-marketplace-c5lgg\" (UID: \"42f5df5b-7b21-45af-beb2-52f4bd141bb5\") " pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.616149 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f5df5b-7b21-45af-beb2-52f4bd141bb5-catalog-content\") pod 
\"redhat-marketplace-c5lgg\" (UID: \"42f5df5b-7b21-45af-beb2-52f4bd141bb5\") " pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.717277 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6m46\" (UniqueName: \"kubernetes.io/projected/42f5df5b-7b21-45af-beb2-52f4bd141bb5-kube-api-access-h6m46\") pod \"redhat-marketplace-c5lgg\" (UID: \"42f5df5b-7b21-45af-beb2-52f4bd141bb5\") " pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.717320 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f5df5b-7b21-45af-beb2-52f4bd141bb5-utilities\") pod \"redhat-marketplace-c5lgg\" (UID: \"42f5df5b-7b21-45af-beb2-52f4bd141bb5\") " pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.717362 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f5df5b-7b21-45af-beb2-52f4bd141bb5-catalog-content\") pod \"redhat-marketplace-c5lgg\" (UID: \"42f5df5b-7b21-45af-beb2-52f4bd141bb5\") " pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.717738 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f5df5b-7b21-45af-beb2-52f4bd141bb5-catalog-content\") pod \"redhat-marketplace-c5lgg\" (UID: \"42f5df5b-7b21-45af-beb2-52f4bd141bb5\") " pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.717958 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f5df5b-7b21-45af-beb2-52f4bd141bb5-utilities\") pod \"redhat-marketplace-c5lgg\" (UID: 
\"42f5df5b-7b21-45af-beb2-52f4bd141bb5\") " pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.718001 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bcbsv"] Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.718846 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.720698 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.740275 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6m46\" (UniqueName: \"kubernetes.io/projected/42f5df5b-7b21-45af-beb2-52f4bd141bb5-kube-api-access-h6m46\") pod \"redhat-marketplace-c5lgg\" (UID: \"42f5df5b-7b21-45af-beb2-52f4bd141bb5\") " pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.766580 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcbsv"] Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.818561 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtrkr\" (UniqueName: \"kubernetes.io/projected/aa47e46f-8ce5-4184-8167-7951842f215e-kube-api-access-xtrkr\") pod \"redhat-operators-bcbsv\" (UID: \"aa47e46f-8ce5-4184-8167-7951842f215e\") " pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.818646 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa47e46f-8ce5-4184-8167-7951842f215e-catalog-content\") pod \"redhat-operators-bcbsv\" (UID: \"aa47e46f-8ce5-4184-8167-7951842f215e\") " 
pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.818682 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa47e46f-8ce5-4184-8167-7951842f215e-utilities\") pod \"redhat-operators-bcbsv\" (UID: \"aa47e46f-8ce5-4184-8167-7951842f215e\") " pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.857302 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.920129 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa47e46f-8ce5-4184-8167-7951842f215e-catalog-content\") pod \"redhat-operators-bcbsv\" (UID: \"aa47e46f-8ce5-4184-8167-7951842f215e\") " pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.920500 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa47e46f-8ce5-4184-8167-7951842f215e-utilities\") pod \"redhat-operators-bcbsv\" (UID: \"aa47e46f-8ce5-4184-8167-7951842f215e\") " pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.920549 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtrkr\" (UniqueName: \"kubernetes.io/projected/aa47e46f-8ce5-4184-8167-7951842f215e-kube-api-access-xtrkr\") pod \"redhat-operators-bcbsv\" (UID: \"aa47e46f-8ce5-4184-8167-7951842f215e\") " pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.920630 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aa47e46f-8ce5-4184-8167-7951842f215e-catalog-content\") pod \"redhat-operators-bcbsv\" (UID: \"aa47e46f-8ce5-4184-8167-7951842f215e\") " pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.920843 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa47e46f-8ce5-4184-8167-7951842f215e-utilities\") pod \"redhat-operators-bcbsv\" (UID: \"aa47e46f-8ce5-4184-8167-7951842f215e\") " pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:13 crc kubenswrapper[4736]: I0214 10:48:13.935942 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtrkr\" (UniqueName: \"kubernetes.io/projected/aa47e46f-8ce5-4184-8167-7951842f215e-kube-api-access-xtrkr\") pod \"redhat-operators-bcbsv\" (UID: \"aa47e46f-8ce5-4184-8167-7951842f215e\") " pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:14 crc kubenswrapper[4736]: I0214 10:48:14.039454 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:14 crc kubenswrapper[4736]: I0214 10:48:14.275680 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c5lgg"] Feb 14 10:48:14 crc kubenswrapper[4736]: W0214 10:48:14.280555 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f5df5b_7b21_45af_beb2_52f4bd141bb5.slice/crio-8473bdda0c73b65afe553c45ca929b094e3e5c9f637f55370ff251bdbf676d29 WatchSource:0}: Error finding container 8473bdda0c73b65afe553c45ca929b094e3e5c9f637f55370ff251bdbf676d29: Status 404 returned error can't find the container with id 8473bdda0c73b65afe553c45ca929b094e3e5c9f637f55370ff251bdbf676d29 Feb 14 10:48:14 crc kubenswrapper[4736]: I0214 10:48:14.403457 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36c96a86-aadc-46d0-bca7-3d9fcca42ec3" path="/var/lib/kubelet/pods/36c96a86-aadc-46d0-bca7-3d9fcca42ec3/volumes" Feb 14 10:48:14 crc kubenswrapper[4736]: I0214 10:48:14.404162 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d771cd-3ef9-44db-8981-3e8241e36f30" path="/var/lib/kubelet/pods/d3d771cd-3ef9-44db-8981-3e8241e36f30/volumes" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.053968 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcbsv"] Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.065171 4736 generic.go:334] "Generic (PLEG): container finished" podID="42f5df5b-7b21-45af-beb2-52f4bd141bb5" containerID="d5ac2553c5f070eb9fe2775ea2b63a88bb88d31702d16eaf5dc949ea27721226" exitCode=0 Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.065881 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5lgg" 
event={"ID":"42f5df5b-7b21-45af-beb2-52f4bd141bb5","Type":"ContainerDied","Data":"d5ac2553c5f070eb9fe2775ea2b63a88bb88d31702d16eaf5dc949ea27721226"} Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.065929 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5lgg" event={"ID":"42f5df5b-7b21-45af-beb2-52f4bd141bb5","Type":"ContainerStarted","Data":"8473bdda0c73b65afe553c45ca929b094e3e5c9f637f55370ff251bdbf676d29"} Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.305345 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz"] Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.305972 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" podUID="47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" containerName="route-controller-manager" containerID="cri-o://bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022" gracePeriod=30 Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.350883 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-94s4t"] Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.356182 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.358385 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-94s4t"] Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.360537 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.438977 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01fa613-73d4-4246-a376-02723ee39286-utilities\") pod \"certified-operators-94s4t\" (UID: \"b01fa613-73d4-4246-a376-02723ee39286\") " pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.439048 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01fa613-73d4-4246-a376-02723ee39286-catalog-content\") pod \"certified-operators-94s4t\" (UID: \"b01fa613-73d4-4246-a376-02723ee39286\") " pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.439136 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4g9\" (UniqueName: \"kubernetes.io/projected/b01fa613-73d4-4246-a376-02723ee39286-kube-api-access-8v4g9\") pod \"certified-operators-94s4t\" (UID: \"b01fa613-73d4-4246-a376-02723ee39286\") " pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.540982 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v4g9\" (UniqueName: \"kubernetes.io/projected/b01fa613-73d4-4246-a376-02723ee39286-kube-api-access-8v4g9\") pod \"certified-operators-94s4t\" 
(UID: \"b01fa613-73d4-4246-a376-02723ee39286\") " pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.541054 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01fa613-73d4-4246-a376-02723ee39286-utilities\") pod \"certified-operators-94s4t\" (UID: \"b01fa613-73d4-4246-a376-02723ee39286\") " pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.541084 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01fa613-73d4-4246-a376-02723ee39286-catalog-content\") pod \"certified-operators-94s4t\" (UID: \"b01fa613-73d4-4246-a376-02723ee39286\") " pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.541529 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01fa613-73d4-4246-a376-02723ee39286-catalog-content\") pod \"certified-operators-94s4t\" (UID: \"b01fa613-73d4-4246-a376-02723ee39286\") " pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.542397 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01fa613-73d4-4246-a376-02723ee39286-utilities\") pod \"certified-operators-94s4t\" (UID: \"b01fa613-73d4-4246-a376-02723ee39286\") " pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.568360 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v4g9\" (UniqueName: \"kubernetes.io/projected/b01fa613-73d4-4246-a376-02723ee39286-kube-api-access-8v4g9\") pod \"certified-operators-94s4t\" (UID: \"b01fa613-73d4-4246-a376-02723ee39286\") " 
pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.639193 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.702836 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.743141 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nlp8\" (UniqueName: \"kubernetes.io/projected/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-kube-api-access-9nlp8\") pod \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.743792 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-config\") pod \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.743839 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-client-ca\") pod \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.743864 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-serving-cert\") pod \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\" (UID: \"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac\") " Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.744326 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-client-ca" (OuterVolumeSpecName: "client-ca") pod "47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" (UID: "47712b8b-7ad1-44cd-9e6e-a584baa7b2ac"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.744335 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-config" (OuterVolumeSpecName: "config") pod "47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" (UID: "47712b8b-7ad1-44cd-9e6e-a584baa7b2ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.746840 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-kube-api-access-9nlp8" (OuterVolumeSpecName: "kube-api-access-9nlp8") pod "47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" (UID: "47712b8b-7ad1-44cd-9e6e-a584baa7b2ac"). InnerVolumeSpecName "kube-api-access-9nlp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.746891 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" (UID: "47712b8b-7ad1-44cd-9e6e-a584baa7b2ac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.845355 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nlp8\" (UniqueName: \"kubernetes.io/projected/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-kube-api-access-9nlp8\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.845381 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.845390 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:15 crc kubenswrapper[4736]: I0214 10:48:15.845397 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.070938 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5lgg" event={"ID":"42f5df5b-7b21-45af-beb2-52f4bd141bb5","Type":"ContainerStarted","Data":"99ff7c4136068bab3e3d7c7ac2b15ffcabbf28844a9ed1c0b9efe22434188b09"} Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.073607 4736 generic.go:334] "Generic (PLEG): container finished" podID="47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" containerID="bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022" exitCode=0 Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.073850 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.074330 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" event={"ID":"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac","Type":"ContainerDied","Data":"bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022"} Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.074458 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz" event={"ID":"47712b8b-7ad1-44cd-9e6e-a584baa7b2ac","Type":"ContainerDied","Data":"79328277bd275176ab8d0f24dc18d6dfdcf8c17063a68acdfd11527bb11c197f"} Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.074484 4736 scope.go:117] "RemoveContainer" containerID="bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.075186 4736 generic.go:334] "Generic (PLEG): container finished" podID="aa47e46f-8ce5-4184-8167-7951842f215e" containerID="e5c6ea0c1911decd5b2a521bd5479a1477d957586ca29cbcebc8b85351c0cff5" exitCode=0 Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.075205 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbsv" event={"ID":"aa47e46f-8ce5-4184-8167-7951842f215e","Type":"ContainerDied","Data":"e5c6ea0c1911decd5b2a521bd5479a1477d957586ca29cbcebc8b85351c0cff5"} Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.075218 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbsv" event={"ID":"aa47e46f-8ce5-4184-8167-7951842f215e","Type":"ContainerStarted","Data":"3e3e84300501292cab5182e24cff8e0a1d24ea4ccb324f7d3616abd57903c1aa"} Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.092699 4736 scope.go:117] "RemoveContainer" 
containerID="bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022" Feb 14 10:48:16 crc kubenswrapper[4736]: E0214 10:48:16.093672 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022\": container with ID starting with bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022 not found: ID does not exist" containerID="bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.093696 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022"} err="failed to get container status \"bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022\": rpc error: code = NotFound desc = could not find container \"bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022\": container with ID starting with bd59e71ee6432cd33971a7b9b7bb66e8c7388c806ba8405ceba3e1be5c7b3022 not found: ID does not exist" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.129869 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz"] Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.137327 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k4zsg"] Feb 14 10:48:16 crc kubenswrapper[4736]: E0214 10:48:16.137559 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" containerName="route-controller-manager" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.137575 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" containerName="route-controller-manager" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.137652 4736 
memory_manager.go:354] "RemoveStaleState removing state" podUID="47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" containerName="route-controller-manager" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.138341 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.141199 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.149530 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5b76bdb6-dmztz"] Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.157477 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k4zsg"] Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.181384 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-94s4t"] Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.249948 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5139d3-a470-4c13-a66a-1fcf2eb8cd7b-utilities\") pod \"community-operators-k4zsg\" (UID: \"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b\") " pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.250429 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5139d3-a470-4c13-a66a-1fcf2eb8cd7b-catalog-content\") pod \"community-operators-k4zsg\" (UID: \"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b\") " pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.250579 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtqlv\" (UniqueName: \"kubernetes.io/projected/af5139d3-a470-4c13-a66a-1fcf2eb8cd7b-kube-api-access-qtqlv\") pod \"community-operators-k4zsg\" (UID: \"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b\") " pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.351804 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5139d3-a470-4c13-a66a-1fcf2eb8cd7b-catalog-content\") pod \"community-operators-k4zsg\" (UID: \"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b\") " pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.351845 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtqlv\" (UniqueName: \"kubernetes.io/projected/af5139d3-a470-4c13-a66a-1fcf2eb8cd7b-kube-api-access-qtqlv\") pod \"community-operators-k4zsg\" (UID: \"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b\") " pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.351871 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5139d3-a470-4c13-a66a-1fcf2eb8cd7b-utilities\") pod \"community-operators-k4zsg\" (UID: \"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b\") " pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.352260 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5139d3-a470-4c13-a66a-1fcf2eb8cd7b-catalog-content\") pod \"community-operators-k4zsg\" (UID: \"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b\") " pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.352292 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5139d3-a470-4c13-a66a-1fcf2eb8cd7b-utilities\") pod \"community-operators-k4zsg\" (UID: \"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b\") " pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.368370 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtqlv\" (UniqueName: \"kubernetes.io/projected/af5139d3-a470-4c13-a66a-1fcf2eb8cd7b-kube-api-access-qtqlv\") pod \"community-operators-k4zsg\" (UID: \"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b\") " pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.405530 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47712b8b-7ad1-44cd-9e6e-a584baa7b2ac" path="/var/lib/kubelet/pods/47712b8b-7ad1-44cd-9e6e-a584baa7b2ac/volumes" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.512052 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.911020 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k4zsg"] Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.942105 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl"] Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.942800 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.946955 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.947105 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.947164 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.947327 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.947405 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.947487 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.960161 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl"] Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.961358 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz2zq\" (UniqueName: \"kubernetes.io/projected/81823b14-a12c-45dd-bf63-c374c9e8939d-kube-api-access-kz2zq\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.961389 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81823b14-a12c-45dd-bf63-c374c9e8939d-config\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.961430 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81823b14-a12c-45dd-bf63-c374c9e8939d-client-ca\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:16 crc kubenswrapper[4736]: I0214 10:48:16.961459 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81823b14-a12c-45dd-bf63-c374c9e8939d-serving-cert\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.062557 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81823b14-a12c-45dd-bf63-c374c9e8939d-client-ca\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.063088 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81823b14-a12c-45dd-bf63-c374c9e8939d-serving-cert\") pod 
\"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.063207 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz2zq\" (UniqueName: \"kubernetes.io/projected/81823b14-a12c-45dd-bf63-c374c9e8939d-kube-api-access-kz2zq\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.063313 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81823b14-a12c-45dd-bf63-c374c9e8939d-config\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.063560 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81823b14-a12c-45dd-bf63-c374c9e8939d-client-ca\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.065181 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81823b14-a12c-45dd-bf63-c374c9e8939d-config\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.072500 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81823b14-a12c-45dd-bf63-c374c9e8939d-serving-cert\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.086019 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz2zq\" (UniqueName: \"kubernetes.io/projected/81823b14-a12c-45dd-bf63-c374c9e8939d-kube-api-access-kz2zq\") pod \"route-controller-manager-69f5dd6f66-mjhjl\" (UID: \"81823b14-a12c-45dd-bf63-c374c9e8939d\") " pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.089315 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbsv" event={"ID":"aa47e46f-8ce5-4184-8167-7951842f215e","Type":"ContainerStarted","Data":"df9dbc213c98641cfbe72c115daacdf8e5bc046d24d756caeef156b1c4ccf372"} Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.094350 4736 generic.go:334] "Generic (PLEG): container finished" podID="af5139d3-a470-4c13-a66a-1fcf2eb8cd7b" containerID="54ae9a1360f3a94919a51995542955dc8710ee507dd5dd59e1a6133f1125bf07" exitCode=0 Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.094439 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4zsg" event={"ID":"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b","Type":"ContainerDied","Data":"54ae9a1360f3a94919a51995542955dc8710ee507dd5dd59e1a6133f1125bf07"} Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.094467 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4zsg" event={"ID":"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b","Type":"ContainerStarted","Data":"c76e558f8c8bdf66c925d55269fdff7a9f46c88ec1998a4b3d10c35e71cf835a"} 
Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.097552 4736 generic.go:334] "Generic (PLEG): container finished" podID="42f5df5b-7b21-45af-beb2-52f4bd141bb5" containerID="99ff7c4136068bab3e3d7c7ac2b15ffcabbf28844a9ed1c0b9efe22434188b09" exitCode=0 Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.097607 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5lgg" event={"ID":"42f5df5b-7b21-45af-beb2-52f4bd141bb5","Type":"ContainerDied","Data":"99ff7c4136068bab3e3d7c7ac2b15ffcabbf28844a9ed1c0b9efe22434188b09"} Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.103164 4736 generic.go:334] "Generic (PLEG): container finished" podID="b01fa613-73d4-4246-a376-02723ee39286" containerID="e620547e6dfe25dcfe73d1f6fd486c26ea1cd5f330845aa8c30f2de16ec1e0a0" exitCode=0 Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.103427 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94s4t" event={"ID":"b01fa613-73d4-4246-a376-02723ee39286","Type":"ContainerDied","Data":"e620547e6dfe25dcfe73d1f6fd486c26ea1cd5f330845aa8c30f2de16ec1e0a0"} Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.103461 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94s4t" event={"ID":"b01fa613-73d4-4246-a376-02723ee39286","Type":"ContainerStarted","Data":"b8029549589395d7204dc2e00e7d6206f65119b7e634c85f8c35c12256e53605"} Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.280945 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.696114 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.696794 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:48:17 crc kubenswrapper[4736]: I0214 10:48:17.732540 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl"] Feb 14 10:48:17 crc kubenswrapper[4736]: W0214 10:48:17.736303 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81823b14_a12c_45dd_bf63_c374c9e8939d.slice/crio-59dc1278bf176e2c600b97b8088472a54a3d8ae71dd13f0c269d07c3370b9558 WatchSource:0}: Error finding container 59dc1278bf176e2c600b97b8088472a54a3d8ae71dd13f0c269d07c3370b9558: Status 404 returned error can't find the container with id 59dc1278bf176e2c600b97b8088472a54a3d8ae71dd13f0c269d07c3370b9558 Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.116693 4736 generic.go:334] "Generic (PLEG): container finished" podID="b01fa613-73d4-4246-a376-02723ee39286" containerID="e6e4a7382ed4a0d8ec17b81e170ca244f8bce2fd4970901ed4127ebaf5a62fe4" exitCode=0 Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.117869 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-94s4t" event={"ID":"b01fa613-73d4-4246-a376-02723ee39286","Type":"ContainerDied","Data":"e6e4a7382ed4a0d8ec17b81e170ca244f8bce2fd4970901ed4127ebaf5a62fe4"} Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.120581 4736 generic.go:334] "Generic (PLEG): container finished" podID="aa47e46f-8ce5-4184-8167-7951842f215e" containerID="df9dbc213c98641cfbe72c115daacdf8e5bc046d24d756caeef156b1c4ccf372" exitCode=0 Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.120617 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbsv" event={"ID":"aa47e46f-8ce5-4184-8167-7951842f215e","Type":"ContainerDied","Data":"df9dbc213c98641cfbe72c115daacdf8e5bc046d24d756caeef156b1c4ccf372"} Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.124856 4736 generic.go:334] "Generic (PLEG): container finished" podID="af5139d3-a470-4c13-a66a-1fcf2eb8cd7b" containerID="d81dbc281e2b6369529d0e0fab30473f315734637aae31a13ab8f1b827a68ff1" exitCode=0 Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.124911 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4zsg" event={"ID":"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b","Type":"ContainerDied","Data":"d81dbc281e2b6369529d0e0fab30473f315734637aae31a13ab8f1b827a68ff1"} Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.131913 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5lgg" event={"ID":"42f5df5b-7b21-45af-beb2-52f4bd141bb5","Type":"ContainerStarted","Data":"79ce9866aad1d967d7807123ebe739d0c287a3bafcd41a1287208ef6d0691563"} Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.133509 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" 
event={"ID":"81823b14-a12c-45dd-bf63-c374c9e8939d","Type":"ContainerStarted","Data":"3f7f32d238d1fe0abbc471b7032991a8bb7e0c9d8601c856c90ae7be5417bbe3"} Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.133536 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" event={"ID":"81823b14-a12c-45dd-bf63-c374c9e8939d","Type":"ContainerStarted","Data":"59dc1278bf176e2c600b97b8088472a54a3d8ae71dd13f0c269d07c3370b9558"} Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.134176 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.176976 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c5lgg" podStartSLOduration=2.634693606 podStartE2EDuration="5.17695824s" podCreationTimestamp="2026-02-14 10:48:13 +0000 UTC" firstStartedPulling="2026-02-14 10:48:15.066438125 +0000 UTC m=+405.435065493" lastFinishedPulling="2026-02-14 10:48:17.608702759 +0000 UTC m=+407.977330127" observedRunningTime="2026-02-14 10:48:18.175721856 +0000 UTC m=+408.544349234" watchObservedRunningTime="2026-02-14 10:48:18.17695824 +0000 UTC m=+408.545585598" Feb 14 10:48:18 crc kubenswrapper[4736]: I0214 10:48:18.204704 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" podStartSLOduration=3.204689211 podStartE2EDuration="3.204689211s" podCreationTimestamp="2026-02-14 10:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:48:18.201721948 +0000 UTC m=+408.570349316" watchObservedRunningTime="2026-02-14 10:48:18.204689211 +0000 UTC m=+408.573316579" Feb 14 10:48:18 crc kubenswrapper[4736]: 
I0214 10:48:18.358094 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69f5dd6f66-mjhjl" Feb 14 10:48:19 crc kubenswrapper[4736]: I0214 10:48:19.139617 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94s4t" event={"ID":"b01fa613-73d4-4246-a376-02723ee39286","Type":"ContainerStarted","Data":"d30ba80a0e953016b784a966c063df6f110b3ad7e5db72ebd4bfe3cdd4396873"} Feb 14 10:48:19 crc kubenswrapper[4736]: I0214 10:48:19.141600 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4zsg" event={"ID":"af5139d3-a470-4c13-a66a-1fcf2eb8cd7b","Type":"ContainerStarted","Data":"8998854da72c7f18834fbc1bc066cc78677dadf6bf73124aa7b0355dae6408f3"} Feb 14 10:48:19 crc kubenswrapper[4736]: I0214 10:48:19.143163 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcbsv" event={"ID":"aa47e46f-8ce5-4184-8167-7951842f215e","Type":"ContainerStarted","Data":"9cad972c1f86158c50f636906bd41a9a0cb71f518987136b65e0e26cdc7c1fd6"} Feb 14 10:48:19 crc kubenswrapper[4736]: I0214 10:48:19.163205 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-94s4t" podStartSLOduration=2.783561993 podStartE2EDuration="4.163188444s" podCreationTimestamp="2026-02-14 10:48:15 +0000 UTC" firstStartedPulling="2026-02-14 10:48:17.106107065 +0000 UTC m=+407.474734433" lastFinishedPulling="2026-02-14 10:48:18.485733516 +0000 UTC m=+408.854360884" observedRunningTime="2026-02-14 10:48:19.162254228 +0000 UTC m=+409.530881596" watchObservedRunningTime="2026-02-14 10:48:19.163188444 +0000 UTC m=+409.531815812" Feb 14 10:48:19 crc kubenswrapper[4736]: I0214 10:48:19.180345 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k4zsg" 
podStartSLOduration=1.719283673 podStartE2EDuration="3.180328287s" podCreationTimestamp="2026-02-14 10:48:16 +0000 UTC" firstStartedPulling="2026-02-14 10:48:17.096959587 +0000 UTC m=+407.465586965" lastFinishedPulling="2026-02-14 10:48:18.558004201 +0000 UTC m=+408.926631579" observedRunningTime="2026-02-14 10:48:19.179415781 +0000 UTC m=+409.548043169" watchObservedRunningTime="2026-02-14 10:48:19.180328287 +0000 UTC m=+409.548955655" Feb 14 10:48:23 crc kubenswrapper[4736]: I0214 10:48:23.858368 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:23 crc kubenswrapper[4736]: I0214 10:48:23.861004 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:23 crc kubenswrapper[4736]: I0214 10:48:23.906101 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:23 crc kubenswrapper[4736]: I0214 10:48:23.922159 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bcbsv" podStartSLOduration=8.380513607 podStartE2EDuration="10.922130072s" podCreationTimestamp="2026-02-14 10:48:13 +0000 UTC" firstStartedPulling="2026-02-14 10:48:16.07652059 +0000 UTC m=+406.445147958" lastFinishedPulling="2026-02-14 10:48:18.618137055 +0000 UTC m=+408.986764423" observedRunningTime="2026-02-14 10:48:19.206760641 +0000 UTC m=+409.575388019" watchObservedRunningTime="2026-02-14 10:48:23.922130072 +0000 UTC m=+414.290757430" Feb 14 10:48:24 crc kubenswrapper[4736]: I0214 10:48:24.040112 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:24 crc kubenswrapper[4736]: I0214 10:48:24.040943 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:24 crc kubenswrapper[4736]: I0214 10:48:24.219728 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c5lgg" Feb 14 10:48:25 crc kubenswrapper[4736]: I0214 10:48:25.084292 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bcbsv" podUID="aa47e46f-8ce5-4184-8167-7951842f215e" containerName="registry-server" probeResult="failure" output=< Feb 14 10:48:25 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 10:48:25 crc kubenswrapper[4736]: > Feb 14 10:48:25 crc kubenswrapper[4736]: I0214 10:48:25.703653 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:25 crc kubenswrapper[4736]: I0214 10:48:25.703695 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:25 crc kubenswrapper[4736]: I0214 10:48:25.742666 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:26 crc kubenswrapper[4736]: I0214 10:48:26.246143 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-94s4t" Feb 14 10:48:26 crc kubenswrapper[4736]: I0214 10:48:26.512721 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:26 crc kubenswrapper[4736]: I0214 10:48:26.512904 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:26 crc kubenswrapper[4736]: I0214 10:48:26.575101 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k4zsg" Feb 14 
10:48:27 crc kubenswrapper[4736]: I0214 10:48:27.222009 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k4zsg" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.097650 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.154396 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bcbsv" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.420891 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" podUID="7b9a5589-a45e-4203-aea7-266e2dfa5088" containerName="registry" containerID="cri-o://c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381" gracePeriod=30 Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.775401 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.817397 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-trusted-ca\") pod \"7b9a5589-a45e-4203-aea7-266e2dfa5088\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.817449 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b9a5589-a45e-4203-aea7-266e2dfa5088-installation-pull-secrets\") pod \"7b9a5589-a45e-4203-aea7-266e2dfa5088\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.817512 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-tls\") pod \"7b9a5589-a45e-4203-aea7-266e2dfa5088\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.817543 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-bound-sa-token\") pod \"7b9a5589-a45e-4203-aea7-266e2dfa5088\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.817666 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7b9a5589-a45e-4203-aea7-266e2dfa5088\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.817707 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bpwbm\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-kube-api-access-bpwbm\") pod \"7b9a5589-a45e-4203-aea7-266e2dfa5088\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.817734 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-certificates\") pod \"7b9a5589-a45e-4203-aea7-266e2dfa5088\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.817785 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b9a5589-a45e-4203-aea7-266e2dfa5088-ca-trust-extracted\") pod \"7b9a5589-a45e-4203-aea7-266e2dfa5088\" (UID: \"7b9a5589-a45e-4203-aea7-266e2dfa5088\") " Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.820410 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7b9a5589-a45e-4203-aea7-266e2dfa5088" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.820784 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7b9a5589-a45e-4203-aea7-266e2dfa5088" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.835375 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7b9a5589-a45e-4203-aea7-266e2dfa5088" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.838282 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7b9a5589-a45e-4203-aea7-266e2dfa5088" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.840599 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-kube-api-access-bpwbm" (OuterVolumeSpecName: "kube-api-access-bpwbm") pod "7b9a5589-a45e-4203-aea7-266e2dfa5088" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088"). InnerVolumeSpecName "kube-api-access-bpwbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.842117 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b9a5589-a45e-4203-aea7-266e2dfa5088-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7b9a5589-a45e-4203-aea7-266e2dfa5088" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.880093 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9a5589-a45e-4203-aea7-266e2dfa5088-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7b9a5589-a45e-4203-aea7-266e2dfa5088" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.927431 4736 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.927474 4736 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.927489 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpwbm\" (UniqueName: \"kubernetes.io/projected/7b9a5589-a45e-4203-aea7-266e2dfa5088-kube-api-access-bpwbm\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.927508 4736 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.927520 4736 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b9a5589-a45e-4203-aea7-266e2dfa5088-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.927533 4736 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b9a5589-a45e-4203-aea7-266e2dfa5088-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.927544 4736 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b9a5589-a45e-4203-aea7-266e2dfa5088-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 14 10:48:34 crc kubenswrapper[4736]: I0214 10:48:34.942194 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7b9a5589-a45e-4203-aea7-266e2dfa5088" (UID: "7b9a5589-a45e-4203-aea7-266e2dfa5088"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 10:48:35 crc kubenswrapper[4736]: I0214 10:48:35.222076 4736 generic.go:334] "Generic (PLEG): container finished" podID="7b9a5589-a45e-4203-aea7-266e2dfa5088" containerID="c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381" exitCode=0 Feb 14 10:48:35 crc kubenswrapper[4736]: I0214 10:48:35.222117 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" event={"ID":"7b9a5589-a45e-4203-aea7-266e2dfa5088","Type":"ContainerDied","Data":"c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381"} Feb 14 10:48:35 crc kubenswrapper[4736]: I0214 10:48:35.222140 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" event={"ID":"7b9a5589-a45e-4203-aea7-266e2dfa5088","Type":"ContainerDied","Data":"152cb58cfebb0d60395a65fa5b3b6b2fa3afc86cd4e3b45a1b8cd382b01a07be"} Feb 14 10:48:35 crc kubenswrapper[4736]: I0214 10:48:35.222155 4736 scope.go:117] "RemoveContainer" 
containerID="c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381" Feb 14 10:48:35 crc kubenswrapper[4736]: I0214 10:48:35.222214 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9fss8" Feb 14 10:48:35 crc kubenswrapper[4736]: I0214 10:48:35.239982 4736 scope.go:117] "RemoveContainer" containerID="c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381" Feb 14 10:48:35 crc kubenswrapper[4736]: E0214 10:48:35.241103 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381\": container with ID starting with c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381 not found: ID does not exist" containerID="c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381" Feb 14 10:48:35 crc kubenswrapper[4736]: I0214 10:48:35.241144 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381"} err="failed to get container status \"c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381\": rpc error: code = NotFound desc = could not find container \"c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381\": container with ID starting with c47e2fcec861d83bfad48cf287aa6b13d93de474595e417e338d4c53bb4f4381 not found: ID does not exist" Feb 14 10:48:35 crc kubenswrapper[4736]: I0214 10:48:35.262772 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fss8"] Feb 14 10:48:35 crc kubenswrapper[4736]: I0214 10:48:35.266956 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fss8"] Feb 14 10:48:36 crc kubenswrapper[4736]: I0214 10:48:36.407805 4736 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7b9a5589-a45e-4203-aea7-266e2dfa5088" path="/var/lib/kubelet/pods/7b9a5589-a45e-4203-aea7-266e2dfa5088/volumes" Feb 14 10:48:47 crc kubenswrapper[4736]: I0214 10:48:47.696071 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:48:47 crc kubenswrapper[4736]: I0214 10:48:47.696871 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:48:47 crc kubenswrapper[4736]: I0214 10:48:47.696966 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:48:47 crc kubenswrapper[4736]: I0214 10:48:47.697966 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4353db1ef94e0c6a61744f3f92cc8b153d1413a006e218ae9bd6f191757294c"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 10:48:47 crc kubenswrapper[4736]: I0214 10:48:47.698073 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://e4353db1ef94e0c6a61744f3f92cc8b153d1413a006e218ae9bd6f191757294c" gracePeriod=600 Feb 14 10:48:48 crc kubenswrapper[4736]: I0214 10:48:48.304499 4736 
generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="e4353db1ef94e0c6a61744f3f92cc8b153d1413a006e218ae9bd6f191757294c" exitCode=0 Feb 14 10:48:48 crc kubenswrapper[4736]: I0214 10:48:48.304618 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"e4353db1ef94e0c6a61744f3f92cc8b153d1413a006e218ae9bd6f191757294c"} Feb 14 10:48:48 crc kubenswrapper[4736]: I0214 10:48:48.304909 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"3d207ea0142334a7f5274ab321669d0403e70d9633dff4e2ac99690c497158f8"} Feb 14 10:48:48 crc kubenswrapper[4736]: I0214 10:48:48.304936 4736 scope.go:117] "RemoveContainer" containerID="e171ba176d1753039f577b6d0ee72115dc107fe53ad81964d40ece0d04b39299" Feb 14 10:51:17 crc kubenswrapper[4736]: I0214 10:51:17.695410 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:51:17 crc kubenswrapper[4736]: I0214 10:51:17.696071 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:51:47 crc kubenswrapper[4736]: I0214 10:51:47.696494 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:51:47 crc kubenswrapper[4736]: I0214 10:51:47.697295 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:52:17 crc kubenswrapper[4736]: I0214 10:52:17.696154 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:52:17 crc kubenswrapper[4736]: I0214 10:52:17.696832 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:52:17 crc kubenswrapper[4736]: I0214 10:52:17.696888 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:52:17 crc kubenswrapper[4736]: I0214 10:52:17.697567 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d207ea0142334a7f5274ab321669d0403e70d9633dff4e2ac99690c497158f8"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 10:52:17 crc kubenswrapper[4736]: I0214 10:52:17.697641 4736 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://3d207ea0142334a7f5274ab321669d0403e70d9633dff4e2ac99690c497158f8" gracePeriod=600 Feb 14 10:52:18 crc kubenswrapper[4736]: I0214 10:52:18.706485 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="3d207ea0142334a7f5274ab321669d0403e70d9633dff4e2ac99690c497158f8" exitCode=0 Feb 14 10:52:18 crc kubenswrapper[4736]: I0214 10:52:18.706648 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"3d207ea0142334a7f5274ab321669d0403e70d9633dff4e2ac99690c497158f8"} Feb 14 10:52:18 crc kubenswrapper[4736]: I0214 10:52:18.708021 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"8b08eeda0c39616325bfc380aaaad11c6609c5f301d0c07f4fa3e51c6e12894e"} Feb 14 10:52:18 crc kubenswrapper[4736]: I0214 10:52:18.708113 4736 scope.go:117] "RemoveContainer" containerID="e4353db1ef94e0c6a61744f3f92cc8b153d1413a006e218ae9bd6f191757294c" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.215223 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh"] Feb 14 10:53:18 crc kubenswrapper[4736]: E0214 10:53:18.215936 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a5589-a45e-4203-aea7-266e2dfa5088" containerName="registry" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.215949 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9a5589-a45e-4203-aea7-266e2dfa5088" containerName="registry" Feb 14 
10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.216062 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9a5589-a45e-4203-aea7-266e2dfa5088" containerName="registry" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.216416 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.218566 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.218764 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.220083 4736 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dxffx" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.225034 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-lsgkg"] Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.225653 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-lsgkg" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.227716 4736 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6x5pr" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.230711 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh"] Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.242662 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vg8jq"] Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.244157 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.246789 4736 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5tzg5" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.263092 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-lsgkg"] Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.263432 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9c6z\" (UniqueName: \"kubernetes.io/projected/d7a4fec3-20be-4ba1-838e-45d9a777ba6a-kube-api-access-p9c6z\") pod \"cert-manager-webhook-687f57d79b-vg8jq\" (UID: \"d7a4fec3-20be-4ba1-838e-45d9a777ba6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.263480 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwgk7\" (UniqueName: \"kubernetes.io/projected/70c4aa44-ebfe-49e1-9e2a-d4f507794c4e-kube-api-access-cwgk7\") pod \"cert-manager-858654f9db-lsgkg\" (UID: \"70c4aa44-ebfe-49e1-9e2a-d4f507794c4e\") " pod="cert-manager/cert-manager-858654f9db-lsgkg" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.263538 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2vrx\" (UniqueName: \"kubernetes.io/projected/c30450ff-d5e3-482b-9d67-63ac08a238e2-kube-api-access-r2vrx\") pod \"cert-manager-cainjector-cf98fcc89-xbtbh\" (UID: \"c30450ff-d5e3-482b-9d67-63ac08a238e2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.276133 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vg8jq"] Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.364463 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwgk7\" (UniqueName: \"kubernetes.io/projected/70c4aa44-ebfe-49e1-9e2a-d4f507794c4e-kube-api-access-cwgk7\") pod \"cert-manager-858654f9db-lsgkg\" (UID: \"70c4aa44-ebfe-49e1-9e2a-d4f507794c4e\") " pod="cert-manager/cert-manager-858654f9db-lsgkg" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.364530 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2vrx\" (UniqueName: \"kubernetes.io/projected/c30450ff-d5e3-482b-9d67-63ac08a238e2-kube-api-access-r2vrx\") pod \"cert-manager-cainjector-cf98fcc89-xbtbh\" (UID: \"c30450ff-d5e3-482b-9d67-63ac08a238e2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.364587 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9c6z\" (UniqueName: \"kubernetes.io/projected/d7a4fec3-20be-4ba1-838e-45d9a777ba6a-kube-api-access-p9c6z\") pod \"cert-manager-webhook-687f57d79b-vg8jq\" (UID: \"d7a4fec3-20be-4ba1-838e-45d9a777ba6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.382314 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwgk7\" (UniqueName: \"kubernetes.io/projected/70c4aa44-ebfe-49e1-9e2a-d4f507794c4e-kube-api-access-cwgk7\") pod \"cert-manager-858654f9db-lsgkg\" (UID: \"70c4aa44-ebfe-49e1-9e2a-d4f507794c4e\") " pod="cert-manager/cert-manager-858654f9db-lsgkg" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.385243 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2vrx\" (UniqueName: \"kubernetes.io/projected/c30450ff-d5e3-482b-9d67-63ac08a238e2-kube-api-access-r2vrx\") pod \"cert-manager-cainjector-cf98fcc89-xbtbh\" (UID: \"c30450ff-d5e3-482b-9d67-63ac08a238e2\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.386694 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9c6z\" (UniqueName: \"kubernetes.io/projected/d7a4fec3-20be-4ba1-838e-45d9a777ba6a-kube-api-access-p9c6z\") pod \"cert-manager-webhook-687f57d79b-vg8jq\" (UID: \"d7a4fec3-20be-4ba1-838e-45d9a777ba6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.535532 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.543718 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-lsgkg" Feb 14 10:53:18 crc kubenswrapper[4736]: I0214 10:53:18.563135 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" Feb 14 10:53:19 crc kubenswrapper[4736]: I0214 10:53:19.022518 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vg8jq"] Feb 14 10:53:19 crc kubenswrapper[4736]: I0214 10:53:19.032867 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 10:53:19 crc kubenswrapper[4736]: W0214 10:53:19.068835 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc30450ff_d5e3_482b_9d67_63ac08a238e2.slice/crio-e05077fec077785a151afba9090ecef5a4109a5d5b14cb42212f59107a9aa204 WatchSource:0}: Error finding container e05077fec077785a151afba9090ecef5a4109a5d5b14cb42212f59107a9aa204: Status 404 returned error can't find the container with id e05077fec077785a151afba9090ecef5a4109a5d5b14cb42212f59107a9aa204 Feb 14 10:53:19 crc kubenswrapper[4736]: I0214 
10:53:19.068845 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh"] Feb 14 10:53:19 crc kubenswrapper[4736]: I0214 10:53:19.073485 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-lsgkg"] Feb 14 10:53:19 crc kubenswrapper[4736]: W0214 10:53:19.078693 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70c4aa44_ebfe_49e1_9e2a_d4f507794c4e.slice/crio-686925a25b53d313fd1b159a6fc03377c9451caa1e249ec3608e325e312e734e WatchSource:0}: Error finding container 686925a25b53d313fd1b159a6fc03377c9451caa1e249ec3608e325e312e734e: Status 404 returned error can't find the container with id 686925a25b53d313fd1b159a6fc03377c9451caa1e249ec3608e325e312e734e Feb 14 10:53:19 crc kubenswrapper[4736]: I0214 10:53:19.535614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" event={"ID":"d7a4fec3-20be-4ba1-838e-45d9a777ba6a","Type":"ContainerStarted","Data":"c74f12304c27e8f7dd3d5b1c699c12bf0848a026a29088b39f3b1846e19e18d5"} Feb 14 10:53:19 crc kubenswrapper[4736]: I0214 10:53:19.536476 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh" event={"ID":"c30450ff-d5e3-482b-9d67-63ac08a238e2","Type":"ContainerStarted","Data":"e05077fec077785a151afba9090ecef5a4109a5d5b14cb42212f59107a9aa204"} Feb 14 10:53:19 crc kubenswrapper[4736]: I0214 10:53:19.537666 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-lsgkg" event={"ID":"70c4aa44-ebfe-49e1-9e2a-d4f507794c4e","Type":"ContainerStarted","Data":"686925a25b53d313fd1b159a6fc03377c9451caa1e249ec3608e325e312e734e"} Feb 14 10:53:23 crc kubenswrapper[4736]: I0214 10:53:23.567079 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-lsgkg" 
event={"ID":"70c4aa44-ebfe-49e1-9e2a-d4f507794c4e","Type":"ContainerStarted","Data":"bfc9f3d85d8a3ce6f01497875ac8832dbcac570194ddcbd1d124c3c6f899f3dc"} Feb 14 10:53:23 crc kubenswrapper[4736]: I0214 10:53:23.569403 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" event={"ID":"d7a4fec3-20be-4ba1-838e-45d9a777ba6a","Type":"ContainerStarted","Data":"78f1c785aeaf35f5c68aaf733d1275a191a7b37b1136c95aba6dda2b5955de1f"} Feb 14 10:53:23 crc kubenswrapper[4736]: I0214 10:53:23.569464 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" Feb 14 10:53:23 crc kubenswrapper[4736]: I0214 10:53:23.572192 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh" event={"ID":"c30450ff-d5e3-482b-9d67-63ac08a238e2","Type":"ContainerStarted","Data":"c5f9a1f14c338a11cb39f58156c7240c17d9559db8d55ebbde184ac84bba8527"} Feb 14 10:53:23 crc kubenswrapper[4736]: I0214 10:53:23.603838 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-lsgkg" podStartSLOduration=1.940378516 podStartE2EDuration="5.603805564s" podCreationTimestamp="2026-02-14 10:53:18 +0000 UTC" firstStartedPulling="2026-02-14 10:53:19.080545316 +0000 UTC m=+709.449172684" lastFinishedPulling="2026-02-14 10:53:22.743972364 +0000 UTC m=+713.112599732" observedRunningTime="2026-02-14 10:53:23.592737562 +0000 UTC m=+713.961364980" watchObservedRunningTime="2026-02-14 10:53:23.603805564 +0000 UTC m=+713.972432972" Feb 14 10:53:23 crc kubenswrapper[4736]: I0214 10:53:23.648492 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xbtbh" podStartSLOduration=2.083512564 podStartE2EDuration="5.648472471s" podCreationTimestamp="2026-02-14 10:53:18 +0000 UTC" firstStartedPulling="2026-02-14 10:53:19.07309972 +0000 UTC 
m=+709.441727088" lastFinishedPulling="2026-02-14 10:53:22.638059627 +0000 UTC m=+713.006686995" observedRunningTime="2026-02-14 10:53:23.62881753 +0000 UTC m=+713.997444908" watchObservedRunningTime="2026-02-14 10:53:23.648472471 +0000 UTC m=+714.017099849" Feb 14 10:53:23 crc kubenswrapper[4736]: I0214 10:53:23.650884 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" podStartSLOduration=1.871469424 podStartE2EDuration="5.650877031s" podCreationTimestamp="2026-02-14 10:53:18 +0000 UTC" firstStartedPulling="2026-02-14 10:53:19.032611734 +0000 UTC m=+709.401239102" lastFinishedPulling="2026-02-14 10:53:22.812019341 +0000 UTC m=+713.180646709" observedRunningTime="2026-02-14 10:53:23.646934037 +0000 UTC m=+714.015561415" watchObservedRunningTime="2026-02-14 10:53:23.650877031 +0000 UTC m=+714.019504409" Feb 14 10:53:28 crc kubenswrapper[4736]: I0214 10:53:28.567582 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-vg8jq" Feb 14 10:53:33 crc kubenswrapper[4736]: I0214 10:53:33.840493 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k7vfr"] Feb 14 10:53:33 crc kubenswrapper[4736]: I0214 10:53:33.841998 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovn-controller" containerID="cri-o://4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b" gracePeriod=30 Feb 14 10:53:33 crc kubenswrapper[4736]: I0214 10:53:33.842080 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f" 
gracePeriod=30 Feb 14 10:53:33 crc kubenswrapper[4736]: I0214 10:53:33.842098 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kube-rbac-proxy-node" containerID="cri-o://b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951" gracePeriod=30 Feb 14 10:53:33 crc kubenswrapper[4736]: I0214 10:53:33.842182 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="northd" containerID="cri-o://bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80" gracePeriod=30 Feb 14 10:53:33 crc kubenswrapper[4736]: I0214 10:53:33.842227 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="sbdb" containerID="cri-o://8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6" gracePeriod=30 Feb 14 10:53:33 crc kubenswrapper[4736]: I0214 10:53:33.842207 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovn-acl-logging" containerID="cri-o://260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6" gracePeriod=30 Feb 14 10:53:33 crc kubenswrapper[4736]: I0214 10:53:33.842361 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="nbdb" containerID="cri-o://6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31" gracePeriod=30 Feb 14 10:53:33 crc kubenswrapper[4736]: I0214 10:53:33.906983 4736 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" containerID="cri-o://92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89" gracePeriod=30 Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.646906 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/2.log" Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.648203 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/1.log" Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.648573 4736 generic.go:334] "Generic (PLEG): container finished" podID="db7224ab-d0ab-49e3-9154-4d9047057681" containerID="a9a51d42096cd417ea48f3ae1a8ec91320986b90813f073e061032c9ca97040f" exitCode=2 Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.648661 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zm7d8" event={"ID":"db7224ab-d0ab-49e3-9154-4d9047057681","Type":"ContainerDied","Data":"a9a51d42096cd417ea48f3ae1a8ec91320986b90813f073e061032c9ca97040f"} Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.649230 4736 scope.go:117] "RemoveContainer" containerID="8023ae74e92e67a7fe9651840857ca8229210c3c3e6c6e4e855221fafe36823a" Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.650943 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/3.log" Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.651261 4736 scope.go:117] "RemoveContainer" containerID="a9a51d42096cd417ea48f3ae1a8ec91320986b90813f073e061032c9ca97040f" Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.652870 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovn-acl-logging/0.log" Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.653220 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovn-controller/0.log" Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.653495 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951" exitCode=0 Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.653518 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6" exitCode=143 Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.653526 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b" exitCode=143 Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.653544 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951"} Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.653568 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6"} Feb 14 10:53:34 crc kubenswrapper[4736]: I0214 10:53:34.653577 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" 
event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b"} Feb 14 10:53:34 crc kubenswrapper[4736]: E0214 10:53:34.655426 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zm7d8_openshift-multus(db7224ab-d0ab-49e3-9154-4d9047057681)\"" pod="openshift-multus/multus-zm7d8" podUID="db7224ab-d0ab-49e3-9154-4d9047057681" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.380623 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/3.log" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.382956 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovn-acl-logging/0.log" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.383426 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovn-controller/0.log" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.383859 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.410832 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-slash\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.410878 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb2mr\" (UniqueName: \"kubernetes.io/projected/4586e477-2198-4f75-aeba-0eaf894cde1a-kube-api-access-hb2mr\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.410895 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-openvswitch\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.410930 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-var-lib-openvswitch\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.410963 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-ovn-kubernetes\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.410975 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-systemd-units\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411010 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-config\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411031 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4586e477-2198-4f75-aeba-0eaf894cde1a-ovn-node-metrics-cert\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411049 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-env-overrides\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411067 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-script-lib\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411090 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-ovn\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: 
\"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411108 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-netd\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411130 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-netns\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411163 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-node-log\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411185 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-kubelet\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411211 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-bin\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411243 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411267 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-etc-openvswitch\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411284 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-systemd\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411307 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-log-socket\") pod \"4586e477-2198-4f75-aeba-0eaf894cde1a\" (UID: \"4586e477-2198-4f75-aeba-0eaf894cde1a\") " Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.411989 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412035 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412058 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412093 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412116 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-node-log" (OuterVolumeSpecName: "node-log") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412141 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412163 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-slash" (OuterVolumeSpecName: "host-slash") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412124 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412326 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412363 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412386 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412443 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-log-socket" (OuterVolumeSpecName: "log-socket") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412475 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412479 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412565 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.412632 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.413526 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.418143 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4586e477-2198-4f75-aeba-0eaf894cde1a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.419814 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4586e477-2198-4f75-aeba-0eaf894cde1a-kube-api-access-hb2mr" (OuterVolumeSpecName: "kube-api-access-hb2mr") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "kube-api-access-hb2mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.439244 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4586e477-2198-4f75-aeba-0eaf894cde1a" (UID: "4586e477-2198-4f75-aeba-0eaf894cde1a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458415 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l6ngm"] Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.458705 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458731 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.458803 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458817 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.458831 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kube-rbac-proxy-node" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458842 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kube-rbac-proxy-node" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.458854 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458865 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.458881 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458892 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.458909 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovn-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458920 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovn-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.458932 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458943 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.458955 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kubecfg-setup" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458965 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kubecfg-setup" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.458978 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="sbdb" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.458989 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="sbdb" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.459004 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="northd" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459016 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="northd" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.459030 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="nbdb" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459041 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="nbdb" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.459056 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovn-acl-logging" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459066 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovn-acl-logging" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459224 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459240 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459253 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kube-rbac-proxy-node" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459267 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="sbdb" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459280 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="nbdb" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459293 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459306 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="northd" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459324 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovn-acl-logging" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459340 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovn-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459355 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459370 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.459512 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459528 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.459717 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerName="ovnkube-controller" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.462086 4736 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.512706 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-ovnkube-config\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.512802 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg68b\" (UniqueName: \"kubernetes.io/projected/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-kube-api-access-jg68b\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.512844 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-run-openvswitch\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.512884 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-ovn-node-metrics-cert\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.512958 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-cni-netd\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513006 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-run-systemd\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513072 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-env-overrides\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513096 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-etc-openvswitch\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513168 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-run-ovn\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513209 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513241 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-node-log\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513289 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-run-ovn-kubernetes\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513356 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-var-lib-openvswitch\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513403 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-ovnkube-script-lib\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513497 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-cni-bin\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513542 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-kubelet\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513587 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-slash\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513624 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-systemd-units\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513683 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-run-netns\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513725 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-log-socket\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513805 4736 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513828 4736 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513847 4736 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513863 4736 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-log-socket\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513880 4736 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-slash\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513896 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb2mr\" (UniqueName: \"kubernetes.io/projected/4586e477-2198-4f75-aeba-0eaf894cde1a-kube-api-access-hb2mr\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 
crc kubenswrapper[4736]: I0214 10:53:35.513913 4736 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513925 4736 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513938 4736 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513950 4736 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513960 4736 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513972 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4586e477-2198-4f75-aeba-0eaf894cde1a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513982 4736 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.513993 4736 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4586e477-2198-4f75-aeba-0eaf894cde1a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.514004 4736 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.514015 4736 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.514026 4736 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.514039 4736 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-node-log\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.514052 4736 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.514066 4736 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4586e477-2198-4f75-aeba-0eaf894cde1a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.614720 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-ovn-node-metrics-cert\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.614797 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-cni-netd\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.614817 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-run-systemd\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.614869 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-run-systemd\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.614921 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-env-overrides\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.614942 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-etc-openvswitch\") pod 
\"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.614962 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-run-ovn\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.614980 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.614995 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-node-log\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615016 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-run-ovn-kubernetes\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615032 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-var-lib-openvswitch\") pod \"ovnkube-node-l6ngm\" 
(UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615047 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-ovnkube-script-lib\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615062 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-cni-bin\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615078 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-kubelet\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615099 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-slash\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615118 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-systemd-units\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615135 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-run-netns\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615150 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-log-socket\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615167 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-ovnkube-config\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615184 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg68b\" (UniqueName: \"kubernetes.io/projected/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-kube-api-access-jg68b\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615198 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-run-openvswitch\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc 
kubenswrapper[4736]: I0214 10:53:35.615239 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-run-openvswitch\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615255 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-cni-netd\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615566 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-cni-bin\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615618 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-etc-openvswitch\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615642 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-run-ovn\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615664 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615698 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-node-log\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615702 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-env-overrides\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615721 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-run-ovn-kubernetes\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615734 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-kubelet\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615767 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-var-lib-openvswitch\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615770 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-slash\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615790 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-systemd-units\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615810 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-host-run-netns\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.615830 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-log-socket\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.616259 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-ovnkube-config\") pod \"ovnkube-node-l6ngm\" (UID: 
\"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.616370 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-ovnkube-script-lib\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.618443 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-ovn-node-metrics-cert\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.635372 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg68b\" (UniqueName: \"kubernetes.io/projected/2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac-kube-api-access-jg68b\") pod \"ovnkube-node-l6ngm\" (UID: \"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.662080 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovnkube-controller/3.log" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.664990 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovn-acl-logging/0.log" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665465 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7vfr_4586e477-2198-4f75-aeba-0eaf894cde1a/ovn-controller/0.log" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 
10:53:35.665719 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89" exitCode=0 Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665761 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6" exitCode=0 Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665771 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31" exitCode=0 Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665780 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80" exitCode=0 Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665789 4736 generic.go:334] "Generic (PLEG): container finished" podID="4586e477-2198-4f75-aeba-0eaf894cde1a" containerID="d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f" exitCode=0 Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665812 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665832 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665857 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665867 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665877 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665885 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665897 4736 scope.go:117] "RemoveContainer" containerID="92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.665911 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-k7vfr" event={"ID":"4586e477-2198-4f75-aeba-0eaf894cde1a","Type":"ContainerDied","Data":"d053642a6ed154e453d0dfa8f89d464d885d431feade9da44c93834d67e67440"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.666067 4736 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.666097 4736 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.666117 4736 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.666135 4736 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.666151 4736 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.666168 4736 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.666190 4736 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 
10:53:35.666203 4736 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.666218 4736 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067"} Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.670264 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/2.log" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.690588 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.716062 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k7vfr"] Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.719297 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k7vfr"] Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.725586 4736 scope.go:117] "RemoveContainer" containerID="8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.751615 4736 scope.go:117] "RemoveContainer" containerID="6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.772332 4736 scope.go:117] "RemoveContainer" containerID="bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.779171 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.786506 4736 scope.go:117] "RemoveContainer" containerID="d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.800569 4736 scope.go:117] "RemoveContainer" containerID="b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.821817 4736 scope.go:117] "RemoveContainer" containerID="260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.839535 4736 scope.go:117] "RemoveContainer" containerID="4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.859376 4736 scope.go:117] "RemoveContainer" containerID="facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.878116 4736 scope.go:117] "RemoveContainer" containerID="92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.878834 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": container with ID starting with 92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89 not found: ID does not exist" containerID="92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.879418 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89"} err="failed to get container status \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": rpc error: code = NotFound desc = could not find 
container \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": container with ID starting with 92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.879448 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.880235 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\": container with ID starting with ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0 not found: ID does not exist" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.880315 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0"} err="failed to get container status \"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\": rpc error: code = NotFound desc = could not find container \"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\": container with ID starting with ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.880397 4736 scope.go:117] "RemoveContainer" containerID="8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.880912 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\": container with ID starting with 8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6 not found: ID does 
not exist" containerID="8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.880982 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6"} err="failed to get container status \"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\": rpc error: code = NotFound desc = could not find container \"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\": container with ID starting with 8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.881022 4736 scope.go:117] "RemoveContainer" containerID="6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.881425 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\": container with ID starting with 6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31 not found: ID does not exist" containerID="6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.881517 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31"} err="failed to get container status \"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\": rpc error: code = NotFound desc = could not find container \"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\": container with ID starting with 6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.881687 4736 
scope.go:117] "RemoveContainer" containerID="bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.882868 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\": container with ID starting with bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80 not found: ID does not exist" containerID="bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.882944 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80"} err="failed to get container status \"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\": rpc error: code = NotFound desc = could not find container \"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\": container with ID starting with bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.882968 4736 scope.go:117] "RemoveContainer" containerID="d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.883329 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\": container with ID starting with d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f not found: ID does not exist" containerID="d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.883375 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f"} err="failed to get container status \"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\": rpc error: code = NotFound desc = could not find container \"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\": container with ID starting with d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.883402 4736 scope.go:117] "RemoveContainer" containerID="b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.884824 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\": container with ID starting with b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951 not found: ID does not exist" containerID="b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.884890 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951"} err="failed to get container status \"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\": rpc error: code = NotFound desc = could not find container \"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\": container with ID starting with b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.884923 4736 scope.go:117] "RemoveContainer" containerID="260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.886048 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\": container with ID starting with 260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6 not found: ID does not exist" containerID="260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.886085 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6"} err="failed to get container status \"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\": rpc error: code = NotFound desc = could not find container \"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\": container with ID starting with 260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.886110 4736 scope.go:117] "RemoveContainer" containerID="4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.886726 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\": container with ID starting with 4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b not found: ID does not exist" containerID="4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.886790 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b"} err="failed to get container status \"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\": rpc error: code = NotFound desc = could not find container 
\"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\": container with ID starting with 4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.886820 4736 scope.go:117] "RemoveContainer" containerID="facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067" Feb 14 10:53:35 crc kubenswrapper[4736]: E0214 10:53:35.890533 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\": container with ID starting with facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067 not found: ID does not exist" containerID="facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.890574 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067"} err="failed to get container status \"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\": rpc error: code = NotFound desc = could not find container \"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\": container with ID starting with facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.890597 4736 scope.go:117] "RemoveContainer" containerID="92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.890973 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89"} err="failed to get container status \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": rpc error: code = NotFound desc = could not find 
container \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": container with ID starting with 92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.891006 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.891483 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0"} err="failed to get container status \"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\": rpc error: code = NotFound desc = could not find container \"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\": container with ID starting with ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.891504 4736 scope.go:117] "RemoveContainer" containerID="8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.891969 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6"} err="failed to get container status \"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\": rpc error: code = NotFound desc = could not find container \"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\": container with ID starting with 8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.892046 4736 scope.go:117] "RemoveContainer" containerID="6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.892488 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31"} err="failed to get container status \"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\": rpc error: code = NotFound desc = could not find container \"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\": container with ID starting with 6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.892525 4736 scope.go:117] "RemoveContainer" containerID="bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.892890 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80"} err="failed to get container status \"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\": rpc error: code = NotFound desc = could not find container \"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\": container with ID starting with bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.892910 4736 scope.go:117] "RemoveContainer" containerID="d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.893217 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f"} err="failed to get container status \"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\": rpc error: code = NotFound desc = could not find container \"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\": container with ID starting with 
d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.893235 4736 scope.go:117] "RemoveContainer" containerID="b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.893714 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951"} err="failed to get container status \"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\": rpc error: code = NotFound desc = could not find container \"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\": container with ID starting with b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.893775 4736 scope.go:117] "RemoveContainer" containerID="260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.894168 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6"} err="failed to get container status \"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\": rpc error: code = NotFound desc = could not find container \"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\": container with ID starting with 260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.894217 4736 scope.go:117] "RemoveContainer" containerID="4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.895145 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b"} err="failed to get container status \"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\": rpc error: code = NotFound desc = could not find container \"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\": container with ID starting with 4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.895194 4736 scope.go:117] "RemoveContainer" containerID="facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.895544 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067"} err="failed to get container status \"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\": rpc error: code = NotFound desc = could not find container \"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\": container with ID starting with facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.895568 4736 scope.go:117] "RemoveContainer" containerID="92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.895955 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89"} err="failed to get container status \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": rpc error: code = NotFound desc = could not find container \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": container with ID starting with 92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89 not found: ID does not 
exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.895994 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.896600 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0"} err="failed to get container status \"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\": rpc error: code = NotFound desc = could not find container \"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\": container with ID starting with ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.896628 4736 scope.go:117] "RemoveContainer" containerID="8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.897260 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6"} err="failed to get container status \"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\": rpc error: code = NotFound desc = could not find container \"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\": container with ID starting with 8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.897288 4736 scope.go:117] "RemoveContainer" containerID="6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.897613 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31"} err="failed to get container status 
\"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\": rpc error: code = NotFound desc = could not find container \"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\": container with ID starting with 6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.897637 4736 scope.go:117] "RemoveContainer" containerID="bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.897900 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80"} err="failed to get container status \"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\": rpc error: code = NotFound desc = could not find container \"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\": container with ID starting with bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.898366 4736 scope.go:117] "RemoveContainer" containerID="d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.901497 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f"} err="failed to get container status \"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\": rpc error: code = NotFound desc = could not find container \"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\": container with ID starting with d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.901525 4736 scope.go:117] "RemoveContainer" 
containerID="b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.902175 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951"} err="failed to get container status \"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\": rpc error: code = NotFound desc = could not find container \"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\": container with ID starting with b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.902231 4736 scope.go:117] "RemoveContainer" containerID="260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.902620 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6"} err="failed to get container status \"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\": rpc error: code = NotFound desc = could not find container \"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\": container with ID starting with 260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.902648 4736 scope.go:117] "RemoveContainer" containerID="4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.902903 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b"} err="failed to get container status \"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\": rpc error: code = NotFound desc = could 
not find container \"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\": container with ID starting with 4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.902942 4736 scope.go:117] "RemoveContainer" containerID="facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.903211 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067"} err="failed to get container status \"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\": rpc error: code = NotFound desc = could not find container \"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\": container with ID starting with facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.903312 4736 scope.go:117] "RemoveContainer" containerID="92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.903625 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89"} err="failed to get container status \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": rpc error: code = NotFound desc = could not find container \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": container with ID starting with 92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.903656 4736 scope.go:117] "RemoveContainer" containerID="ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 
10:53:35.903967 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0"} err="failed to get container status \"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\": rpc error: code = NotFound desc = could not find container \"ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0\": container with ID starting with ad9ea90f6920996e4b0d574b3d86ddcd2b59b829b1c9320d759a9d314828f1f0 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.903994 4736 scope.go:117] "RemoveContainer" containerID="8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.904608 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6"} err="failed to get container status \"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\": rpc error: code = NotFound desc = could not find container \"8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6\": container with ID starting with 8067c8be3da20447fda46b2fb7e7c788a7c6995051343f54db5ce2c0e3d6cad6 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.904657 4736 scope.go:117] "RemoveContainer" containerID="6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.905214 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31"} err="failed to get container status \"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\": rpc error: code = NotFound desc = could not find container \"6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31\": container with ID starting with 
6c62bdbfac86ac347903f8503e67f8a4bfb4a385091094558d14a69fda008a31 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.905245 4736 scope.go:117] "RemoveContainer" containerID="bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.908375 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80"} err="failed to get container status \"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\": rpc error: code = NotFound desc = could not find container \"bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80\": container with ID starting with bcc1b67f0e7e4e584edda690e2228f09c729714a5575fa55ecdc7857e8cf1c80 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.908412 4736 scope.go:117] "RemoveContainer" containerID="d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.908816 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f"} err="failed to get container status \"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\": rpc error: code = NotFound desc = could not find container \"d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f\": container with ID starting with d608205ead81d01e300e81eac3fececf69410015b8e5e84d7837279fc6dfd94f not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.908841 4736 scope.go:117] "RemoveContainer" containerID="b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.909165 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951"} err="failed to get container status \"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\": rpc error: code = NotFound desc = could not find container \"b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951\": container with ID starting with b0bbe69ad3cc791a06423073424dd4edff7e7646958765217c1ba9f3c479b951 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.909205 4736 scope.go:117] "RemoveContainer" containerID="260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.909600 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6"} err="failed to get container status \"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\": rpc error: code = NotFound desc = could not find container \"260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6\": container with ID starting with 260eb74b9d84165c823f3ef7697a4f0f3c93a9dbc84be80e8a0c81428f8871c6 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.909635 4736 scope.go:117] "RemoveContainer" containerID="4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.909886 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b"} err="failed to get container status \"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\": rpc error: code = NotFound desc = could not find container \"4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b\": container with ID starting with 4df0a1c85684205f511497fcba02d9329442f08ddfe4aa96301f8c59ec75bd0b not found: ID does not 
exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.909919 4736 scope.go:117] "RemoveContainer" containerID="facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.910269 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067"} err="failed to get container status \"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\": rpc error: code = NotFound desc = could not find container \"facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067\": container with ID starting with facde56725ca513177751054f67089761bfac3f74291ad8fc7bc6d207f5ce067 not found: ID does not exist" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.910296 4736 scope.go:117] "RemoveContainer" containerID="92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89" Feb 14 10:53:35 crc kubenswrapper[4736]: I0214 10:53:35.910667 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89"} err="failed to get container status \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": rpc error: code = NotFound desc = could not find container \"92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89\": container with ID starting with 92dc832c14586473e9abc1b2f45d93a6af4131bf19ce8eb341f36bcf7c763f89 not found: ID does not exist" Feb 14 10:53:36 crc kubenswrapper[4736]: I0214 10:53:36.405272 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4586e477-2198-4f75-aeba-0eaf894cde1a" path="/var/lib/kubelet/pods/4586e477-2198-4f75-aeba-0eaf894cde1a/volumes" Feb 14 10:53:36 crc kubenswrapper[4736]: I0214 10:53:36.680812 4736 generic.go:334] "Generic (PLEG): container finished" podID="2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac" 
containerID="c5bf326205bb55e2f16f4ddd5888d438a564f2353d1350e2c0d5038298155d89" exitCode=0 Feb 14 10:53:36 crc kubenswrapper[4736]: I0214 10:53:36.680853 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerDied","Data":"c5bf326205bb55e2f16f4ddd5888d438a564f2353d1350e2c0d5038298155d89"} Feb 14 10:53:36 crc kubenswrapper[4736]: I0214 10:53:36.680928 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerStarted","Data":"22bfeb5e76ee949ee48e3abf8392a725785595d63aa3b013b81bf3ecc9b2f440"} Feb 14 10:53:37 crc kubenswrapper[4736]: I0214 10:53:37.693219 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerStarted","Data":"03ac769f49e68eb26df648aba1fb9b2aa1fe98bdbd474b69c0d7d208074b719e"} Feb 14 10:53:37 crc kubenswrapper[4736]: I0214 10:53:37.693989 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerStarted","Data":"e22d942643aa97e0fdda43844625943825f521fdb9d29ccee8163189d17e33de"} Feb 14 10:53:37 crc kubenswrapper[4736]: I0214 10:53:37.694015 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerStarted","Data":"17506d39672aba65b65520d12bf4d896e7fbe4a75444d21445c5b67318081d58"} Feb 14 10:53:37 crc kubenswrapper[4736]: I0214 10:53:37.694033 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" 
event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerStarted","Data":"9a8293809c257bc49ae41f7a3bbb531530c258150cd7e7937d98ecdf1012ac3e"} Feb 14 10:53:37 crc kubenswrapper[4736]: I0214 10:53:37.694052 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerStarted","Data":"497cee99eed2453d8478c63030fc2af46b3a6c60519b66c6ea2c762473903381"} Feb 14 10:53:37 crc kubenswrapper[4736]: I0214 10:53:37.694068 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerStarted","Data":"bbe3bf907019cbd575458aae3634ea45fd7c41043e5eae56fd507a01e63905b7"} Feb 14 10:53:40 crc kubenswrapper[4736]: I0214 10:53:40.726013 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerStarted","Data":"d483f8d28bf5d13c57b842ee73d0a14c8b2c08fb8fda91bb63b20c97dd66b5fc"} Feb 14 10:53:42 crc kubenswrapper[4736]: I0214 10:53:42.742436 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" event={"ID":"2e47c1eb-fe4d-4c70-9460-f50f1bb1aeac","Type":"ContainerStarted","Data":"ca8a739ae68e99e0f548fccdd89b40a718dee5a16bd3d3ca7dcb6c6e12cb4b0e"} Feb 14 10:53:42 crc kubenswrapper[4736]: I0214 10:53:42.742888 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:42 crc kubenswrapper[4736]: I0214 10:53:42.742934 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:42 crc kubenswrapper[4736]: I0214 10:53:42.794652 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:42 
crc kubenswrapper[4736]: I0214 10:53:42.817413 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" podStartSLOduration=7.817397697 podStartE2EDuration="7.817397697s" podCreationTimestamp="2026-02-14 10:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:53:42.812037181 +0000 UTC m=+733.180664559" watchObservedRunningTime="2026-02-14 10:53:42.817397697 +0000 UTC m=+733.186025065" Feb 14 10:53:43 crc kubenswrapper[4736]: I0214 10:53:43.749897 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:43 crc kubenswrapper[4736]: I0214 10:53:43.800154 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:53:47 crc kubenswrapper[4736]: I0214 10:53:47.397221 4736 scope.go:117] "RemoveContainer" containerID="a9a51d42096cd417ea48f3ae1a8ec91320986b90813f073e061032c9ca97040f" Feb 14 10:53:47 crc kubenswrapper[4736]: I0214 10:53:47.773868 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zm7d8_db7224ab-d0ab-49e3-9154-4d9047057681/kube-multus/2.log" Feb 14 10:53:47 crc kubenswrapper[4736]: I0214 10:53:47.774204 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zm7d8" event={"ID":"db7224ab-d0ab-49e3-9154-4d9047057681","Type":"ContainerStarted","Data":"1c7a11105de108da80cdd72fc77c3eac1c5fa219e70440e9c252d56845c91f1f"} Feb 14 10:54:05 crc kubenswrapper[4736]: I0214 10:54:05.816036 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6ngm" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.192095 4736 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95"] Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.192972 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.195437 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.204550 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95"] Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.275474 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92zw6\" (UniqueName: \"kubernetes.io/projected/af3f932a-49f7-44eb-a953-0be84900d37a-kube-api-access-92zw6\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.275519 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.275549 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-util\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.376827 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92zw6\" (UniqueName: \"kubernetes.io/projected/af3f932a-49f7-44eb-a953-0be84900d37a-kube-api-access-92zw6\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.376865 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.376886 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.377424 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.377564 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.397530 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92zw6\" (UniqueName: \"kubernetes.io/projected/af3f932a-49f7-44eb-a953-0be84900d37a-kube-api-access-92zw6\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.508041 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:08 crc kubenswrapper[4736]: I0214 10:54:08.905251 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95"] Feb 14 10:54:08 crc kubenswrapper[4736]: W0214 10:54:08.913565 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf3f932a_49f7_44eb_a953_0be84900d37a.slice/crio-47a56a6313673be8e1fa37a33e6c580269b8b99465ac16b61b30a4220e94afc2 WatchSource:0}: Error finding container 47a56a6313673be8e1fa37a33e6c580269b8b99465ac16b61b30a4220e94afc2: Status 404 returned error can't find the container with id 47a56a6313673be8e1fa37a33e6c580269b8b99465ac16b61b30a4220e94afc2 Feb 14 10:54:09 crc kubenswrapper[4736]: I0214 10:54:09.905595 4736 generic.go:334] "Generic (PLEG): container finished" podID="af3f932a-49f7-44eb-a953-0be84900d37a" containerID="6639c99a1ee1b7cf46a4ab44afa70eb277a459b2f16483034fb702d71e22c683" exitCode=0 Feb 14 10:54:09 crc kubenswrapper[4736]: I0214 10:54:09.905797 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" event={"ID":"af3f932a-49f7-44eb-a953-0be84900d37a","Type":"ContainerDied","Data":"6639c99a1ee1b7cf46a4ab44afa70eb277a459b2f16483034fb702d71e22c683"} Feb 14 10:54:09 crc kubenswrapper[4736]: I0214 10:54:09.905885 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" event={"ID":"af3f932a-49f7-44eb-a953-0be84900d37a","Type":"ContainerStarted","Data":"47a56a6313673be8e1fa37a33e6c580269b8b99465ac16b61b30a4220e94afc2"} Feb 14 10:54:11 crc kubenswrapper[4736]: I0214 10:54:11.920724 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="af3f932a-49f7-44eb-a953-0be84900d37a" containerID="78b2cd0d989454bd8ab0389891802d6f2d85ee2a4f9469091b0032bd8ab4e0f5" exitCode=0 Feb 14 10:54:11 crc kubenswrapper[4736]: I0214 10:54:11.920862 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" event={"ID":"af3f932a-49f7-44eb-a953-0be84900d37a","Type":"ContainerDied","Data":"78b2cd0d989454bd8ab0389891802d6f2d85ee2a4f9469091b0032bd8ab4e0f5"} Feb 14 10:54:12 crc kubenswrapper[4736]: I0214 10:54:12.938463 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" event={"ID":"af3f932a-49f7-44eb-a953-0be84900d37a","Type":"ContainerDied","Data":"41dcc1d5d952a554e448a7bc589164f914a4b05fb8be5aa8fc152374e5c22d52"} Feb 14 10:54:12 crc kubenswrapper[4736]: I0214 10:54:12.938540 4736 generic.go:334] "Generic (PLEG): container finished" podID="af3f932a-49f7-44eb-a953-0be84900d37a" containerID="41dcc1d5d952a554e448a7bc589164f914a4b05fb8be5aa8fc152374e5c22d52" exitCode=0 Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.138921 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.245058 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92zw6\" (UniqueName: \"kubernetes.io/projected/af3f932a-49f7-44eb-a953-0be84900d37a-kube-api-access-92zw6\") pod \"af3f932a-49f7-44eb-a953-0be84900d37a\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.245140 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-bundle\") pod \"af3f932a-49f7-44eb-a953-0be84900d37a\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.245166 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-util\") pod \"af3f932a-49f7-44eb-a953-0be84900d37a\" (UID: \"af3f932a-49f7-44eb-a953-0be84900d37a\") " Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.246254 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-bundle" (OuterVolumeSpecName: "bundle") pod "af3f932a-49f7-44eb-a953-0be84900d37a" (UID: "af3f932a-49f7-44eb-a953-0be84900d37a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.253473 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af3f932a-49f7-44eb-a953-0be84900d37a-kube-api-access-92zw6" (OuterVolumeSpecName: "kube-api-access-92zw6") pod "af3f932a-49f7-44eb-a953-0be84900d37a" (UID: "af3f932a-49f7-44eb-a953-0be84900d37a"). InnerVolumeSpecName "kube-api-access-92zw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.266290 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-util" (OuterVolumeSpecName: "util") pod "af3f932a-49f7-44eb-a953-0be84900d37a" (UID: "af3f932a-49f7-44eb-a953-0be84900d37a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.346629 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92zw6\" (UniqueName: \"kubernetes.io/projected/af3f932a-49f7-44eb-a953-0be84900d37a-kube-api-access-92zw6\") on node \"crc\" DevicePath \"\"" Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.346667 4736 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.346680 4736 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af3f932a-49f7-44eb-a953-0be84900d37a-util\") on node \"crc\" DevicePath \"\"" Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.954589 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" event={"ID":"af3f932a-49f7-44eb-a953-0be84900d37a","Type":"ContainerDied","Data":"47a56a6313673be8e1fa37a33e6c580269b8b99465ac16b61b30a4220e94afc2"} Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.955856 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47a56a6313673be8e1fa37a33e6c580269b8b99465ac16b61b30a4220e94afc2" Feb 14 10:54:14 crc kubenswrapper[4736]: I0214 10:54:14.955923 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95" Feb 14 10:54:17 crc kubenswrapper[4736]: I0214 10:54:17.696596 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:54:17 crc kubenswrapper[4736]: I0214 10:54:17.697016 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.736251 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-8d5p5"] Feb 14 10:54:19 crc kubenswrapper[4736]: E0214 10:54:19.737356 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3f932a-49f7-44eb-a953-0be84900d37a" containerName="pull" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.737448 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3f932a-49f7-44eb-a953-0be84900d37a" containerName="pull" Feb 14 10:54:19 crc kubenswrapper[4736]: E0214 10:54:19.737526 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3f932a-49f7-44eb-a953-0be84900d37a" containerName="util" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.737602 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3f932a-49f7-44eb-a953-0be84900d37a" containerName="util" Feb 14 10:54:19 crc kubenswrapper[4736]: E0214 10:54:19.737683 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3f932a-49f7-44eb-a953-0be84900d37a" containerName="extract" Feb 14 
10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.737801 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3f932a-49f7-44eb-a953-0be84900d37a" containerName="extract" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.738008 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3f932a-49f7-44eb-a953-0be84900d37a" containerName="extract" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.738531 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-8d5p5" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.742491 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.743727 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-d8gct" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.743926 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.750634 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-8d5p5"] Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.818059 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghwck\" (UniqueName: \"kubernetes.io/projected/f9f3eda0-51a6-4de7-86d2-7b68836bcb67-kube-api-access-ghwck\") pod \"nmstate-operator-694c9596b7-8d5p5\" (UID: \"f9f3eda0-51a6-4de7-86d2-7b68836bcb67\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-8d5p5" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.919183 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghwck\" (UniqueName: \"kubernetes.io/projected/f9f3eda0-51a6-4de7-86d2-7b68836bcb67-kube-api-access-ghwck\") pod 
\"nmstate-operator-694c9596b7-8d5p5\" (UID: \"f9f3eda0-51a6-4de7-86d2-7b68836bcb67\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-8d5p5" Feb 14 10:54:19 crc kubenswrapper[4736]: I0214 10:54:19.936829 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghwck\" (UniqueName: \"kubernetes.io/projected/f9f3eda0-51a6-4de7-86d2-7b68836bcb67-kube-api-access-ghwck\") pod \"nmstate-operator-694c9596b7-8d5p5\" (UID: \"f9f3eda0-51a6-4de7-86d2-7b68836bcb67\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-8d5p5" Feb 14 10:54:20 crc kubenswrapper[4736]: I0214 10:54:20.060422 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-8d5p5" Feb 14 10:54:20 crc kubenswrapper[4736]: I0214 10:54:20.260375 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-8d5p5"] Feb 14 10:54:20 crc kubenswrapper[4736]: I0214 10:54:20.994531 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-8d5p5" event={"ID":"f9f3eda0-51a6-4de7-86d2-7b68836bcb67","Type":"ContainerStarted","Data":"61e34da164afa92fefd073f210de1a2e57d7b232401da27299b6a1062b3e245f"} Feb 14 10:54:23 crc kubenswrapper[4736]: I0214 10:54:23.006437 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-8d5p5" event={"ID":"f9f3eda0-51a6-4de7-86d2-7b68836bcb67","Type":"ContainerStarted","Data":"eba2ded881c4a61ccc4a4628f33eadd0fc5f49092e51886c37f3d6d58431f0c1"} Feb 14 10:54:23 crc kubenswrapper[4736]: I0214 10:54:23.024603 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-8d5p5" podStartSLOduration=1.674125463 podStartE2EDuration="4.024584999s" podCreationTimestamp="2026-02-14 10:54:19 +0000 UTC" firstStartedPulling="2026-02-14 10:54:20.270644271 +0000 UTC m=+770.639271629" 
lastFinishedPulling="2026-02-14 10:54:22.621103797 +0000 UTC m=+772.989731165" observedRunningTime="2026-02-14 10:54:23.022581591 +0000 UTC m=+773.391208989" watchObservedRunningTime="2026-02-14 10:54:23.024584999 +0000 UTC m=+773.393212377" Feb 14 10:54:27 crc kubenswrapper[4736]: I0214 10:54:27.268186 4736 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.362866 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-krg4j"] Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.364154 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-krg4j" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.367334 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mckg4" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.409013 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-krg4j"] Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.440429 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tfkkw"] Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.441180 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.447232 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9"] Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.448081 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.450567 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.464903 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9"] Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.539457 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88n9\" (UniqueName: \"kubernetes.io/projected/e04e8849-dd0e-4a3f-98e0-8925563c7145-kube-api-access-b88n9\") pod \"nmstate-metrics-58c85c668d-krg4j\" (UID: \"e04e8849-dd0e-4a3f-98e0-8925563c7145\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-krg4j" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.539508 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chnq\" (UniqueName: \"kubernetes.io/projected/1799375f-7713-43d7-a0b2-9c76efff7daf-kube-api-access-6chnq\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.539549 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/73130fe2-047e-44a9-986b-0734857df7a6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-2gjv9\" (UID: \"73130fe2-047e-44a9-986b-0734857df7a6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.539633 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1799375f-7713-43d7-a0b2-9c76efff7daf-dbus-socket\") pod 
\"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.539655 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1799375f-7713-43d7-a0b2-9c76efff7daf-nmstate-lock\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.539686 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1799375f-7713-43d7-a0b2-9c76efff7daf-ovs-socket\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.539703 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rdfd\" (UniqueName: \"kubernetes.io/projected/73130fe2-047e-44a9-986b-0734857df7a6-kube-api-access-5rdfd\") pod \"nmstate-webhook-866bcb46dc-2gjv9\" (UID: \"73130fe2-047e-44a9-986b-0734857df7a6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.604220 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb"] Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.605107 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.606792 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.607589 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb"] Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.608481 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2nz8r" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.608702 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640436 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1799375f-7713-43d7-a0b2-9c76efff7daf-dbus-socket\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640491 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1799375f-7713-43d7-a0b2-9c76efff7daf-nmstate-lock\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640535 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1799375f-7713-43d7-a0b2-9c76efff7daf-ovs-socket\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640562 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rdfd\" (UniqueName: \"kubernetes.io/projected/73130fe2-047e-44a9-986b-0734857df7a6-kube-api-access-5rdfd\") pod \"nmstate-webhook-866bcb46dc-2gjv9\" (UID: \"73130fe2-047e-44a9-986b-0734857df7a6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640609 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b88n9\" (UniqueName: \"kubernetes.io/projected/e04e8849-dd0e-4a3f-98e0-8925563c7145-kube-api-access-b88n9\") pod \"nmstate-metrics-58c85c668d-krg4j\" (UID: \"e04e8849-dd0e-4a3f-98e0-8925563c7145\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-krg4j" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640626 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chnq\" (UniqueName: \"kubernetes.io/projected/1799375f-7713-43d7-a0b2-9c76efff7daf-kube-api-access-6chnq\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640646 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/73130fe2-047e-44a9-986b-0734857df7a6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-2gjv9\" (UID: \"73130fe2-047e-44a9-986b-0734857df7a6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640642 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1799375f-7713-43d7-a0b2-9c76efff7daf-nmstate-lock\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640661 
4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1799375f-7713-43d7-a0b2-9c76efff7daf-ovs-socket\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: E0214 10:54:28.640757 4736 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 14 10:54:28 crc kubenswrapper[4736]: E0214 10:54:28.640803 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73130fe2-047e-44a9-986b-0734857df7a6-tls-key-pair podName:73130fe2-047e-44a9-986b-0734857df7a6 nodeName:}" failed. No retries permitted until 2026-02-14 10:54:29.14078852 +0000 UTC m=+779.509415888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/73130fe2-047e-44a9-986b-0734857df7a6-tls-key-pair") pod "nmstate-webhook-866bcb46dc-2gjv9" (UID: "73130fe2-047e-44a9-986b-0734857df7a6") : secret "openshift-nmstate-webhook" not found Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.640876 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1799375f-7713-43d7-a0b2-9c76efff7daf-dbus-socket\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.662332 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chnq\" (UniqueName: \"kubernetes.io/projected/1799375f-7713-43d7-a0b2-9c76efff7daf-kube-api-access-6chnq\") pod \"nmstate-handler-tfkkw\" (UID: \"1799375f-7713-43d7-a0b2-9c76efff7daf\") " pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.662736 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rdfd\" (UniqueName: \"kubernetes.io/projected/73130fe2-047e-44a9-986b-0734857df7a6-kube-api-access-5rdfd\") pod \"nmstate-webhook-866bcb46dc-2gjv9\" (UID: \"73130fe2-047e-44a9-986b-0734857df7a6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.664014 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b88n9\" (UniqueName: \"kubernetes.io/projected/e04e8849-dd0e-4a3f-98e0-8925563c7145-kube-api-access-b88n9\") pod \"nmstate-metrics-58c85c668d-krg4j\" (UID: \"e04e8849-dd0e-4a3f-98e0-8925563c7145\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-krg4j" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.731089 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-krg4j" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.741796 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzwq4\" (UniqueName: \"kubernetes.io/projected/e3223403-6c82-4af7-8a7a-902982281d8b-kube-api-access-nzwq4\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.742037 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3223403-6c82-4af7-8a7a-902982281d8b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.742123 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e3223403-6c82-4af7-8a7a-902982281d8b-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.755872 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:28 crc kubenswrapper[4736]: W0214 10:54:28.790560 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1799375f_7713_43d7_a0b2_9c76efff7daf.slice/crio-a25e2c7d65fbbf449ef43dea5ecf9d62387cd2a926029d16dda8c5f15d1e9fd8 WatchSource:0}: Error finding container a25e2c7d65fbbf449ef43dea5ecf9d62387cd2a926029d16dda8c5f15d1e9fd8: Status 404 returned error can't find the container with id a25e2c7d65fbbf449ef43dea5ecf9d62387cd2a926029d16dda8c5f15d1e9fd8 Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.799705 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6ffd9bdb95-nkvk6"] Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.800279 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.815292 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6ffd9bdb95-nkvk6"] Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.844822 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzwq4\" (UniqueName: \"kubernetes.io/projected/e3223403-6c82-4af7-8a7a-902982281d8b-kube-api-access-nzwq4\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.844884 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3223403-6c82-4af7-8a7a-902982281d8b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.844912 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e3223403-6c82-4af7-8a7a-902982281d8b-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.845702 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e3223403-6c82-4af7-8a7a-902982281d8b-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:28 crc kubenswrapper[4736]: E0214 10:54:28.846007 4736 secret.go:188] 
Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 14 10:54:28 crc kubenswrapper[4736]: E0214 10:54:28.846040 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3223403-6c82-4af7-8a7a-902982281d8b-plugin-serving-cert podName:e3223403-6c82-4af7-8a7a-902982281d8b nodeName:}" failed. No retries permitted until 2026-02-14 10:54:29.346029278 +0000 UTC m=+779.714656646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/e3223403-6c82-4af7-8a7a-902982281d8b-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-989cb" (UID: "e3223403-6c82-4af7-8a7a-902982281d8b") : secret "plugin-serving-cert" not found Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.861274 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzwq4\" (UniqueName: \"kubernetes.io/projected/e3223403-6c82-4af7-8a7a-902982281d8b-kube-api-access-nzwq4\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.946546 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-trusted-ca-bundle\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.946897 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-console-config\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " 
pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.946939 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2eea7bcb-08c8-4d84-9a14-f63837398ae3-console-oauth-config\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.946986 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmlzh\" (UniqueName: \"kubernetes.io/projected/2eea7bcb-08c8-4d84-9a14-f63837398ae3-kube-api-access-bmlzh\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.947011 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-service-ca\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.947034 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2eea7bcb-08c8-4d84-9a14-f63837398ae3-console-serving-cert\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.947138 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-oauth-serving-cert\") pod 
\"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:28 crc kubenswrapper[4736]: I0214 10:54:28.993006 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-krg4j"] Feb 14 10:54:28 crc kubenswrapper[4736]: W0214 10:54:28.998142 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode04e8849_dd0e_4a3f_98e0_8925563c7145.slice/crio-f1b5703caeef43ad067769918994444acec1fd398db57bb24c728f9ce6c7e254 WatchSource:0}: Error finding container f1b5703caeef43ad067769918994444acec1fd398db57bb24c728f9ce6c7e254: Status 404 returned error can't find the container with id f1b5703caeef43ad067769918994444acec1fd398db57bb24c728f9ce6c7e254 Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.038572 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-krg4j" event={"ID":"e04e8849-dd0e-4a3f-98e0-8925563c7145","Type":"ContainerStarted","Data":"f1b5703caeef43ad067769918994444acec1fd398db57bb24c728f9ce6c7e254"} Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.039516 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tfkkw" event={"ID":"1799375f-7713-43d7-a0b2-9c76efff7daf","Type":"ContainerStarted","Data":"a25e2c7d65fbbf449ef43dea5ecf9d62387cd2a926029d16dda8c5f15d1e9fd8"} Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.048196 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-service-ca\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.048236 4736 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2eea7bcb-08c8-4d84-9a14-f63837398ae3-console-serving-cert\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.048268 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-oauth-serving-cert\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.048295 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-trusted-ca-bundle\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.048325 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-console-config\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.048368 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2eea7bcb-08c8-4d84-9a14-f63837398ae3-console-oauth-config\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.048394 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bmlzh\" (UniqueName: \"kubernetes.io/projected/2eea7bcb-08c8-4d84-9a14-f63837398ae3-kube-api-access-bmlzh\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.049266 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-oauth-serving-cert\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.049354 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-trusted-ca-bundle\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.049394 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-console-config\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.049919 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2eea7bcb-08c8-4d84-9a14-f63837398ae3-service-ca\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.052478 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2eea7bcb-08c8-4d84-9a14-f63837398ae3-console-serving-cert\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.056230 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2eea7bcb-08c8-4d84-9a14-f63837398ae3-console-oauth-config\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.062507 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmlzh\" (UniqueName: \"kubernetes.io/projected/2eea7bcb-08c8-4d84-9a14-f63837398ae3-kube-api-access-bmlzh\") pod \"console-6ffd9bdb95-nkvk6\" (UID: \"2eea7bcb-08c8-4d84-9a14-f63837398ae3\") " pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.131873 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.150200 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/73130fe2-047e-44a9-986b-0734857df7a6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-2gjv9\" (UID: \"73130fe2-047e-44a9-986b-0734857df7a6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.154303 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/73130fe2-047e-44a9-986b-0734857df7a6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-2gjv9\" (UID: \"73130fe2-047e-44a9-986b-0734857df7a6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.352534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3223403-6c82-4af7-8a7a-902982281d8b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.356720 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3223403-6c82-4af7-8a7a-902982281d8b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-989cb\" (UID: \"e3223403-6c82-4af7-8a7a-902982281d8b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.365170 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.515983 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.534680 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6ffd9bdb95-nkvk6"] Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.722308 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb"] Feb 14 10:54:29 crc kubenswrapper[4736]: W0214 10:54:29.730636 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3223403_6c82_4af7_8a7a_902982281d8b.slice/crio-d629ca911934f65dbf8bdb44bff4031c8e998bb424b6fe02b467fb5f5776d6c1 WatchSource:0}: Error finding container d629ca911934f65dbf8bdb44bff4031c8e998bb424b6fe02b467fb5f5776d6c1: Status 404 returned error can't find the container with id d629ca911934f65dbf8bdb44bff4031c8e998bb424b6fe02b467fb5f5776d6c1 Feb 14 10:54:29 crc kubenswrapper[4736]: I0214 10:54:29.774293 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9"] Feb 14 10:54:30 crc kubenswrapper[4736]: I0214 10:54:30.047142 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" event={"ID":"73130fe2-047e-44a9-986b-0734857df7a6","Type":"ContainerStarted","Data":"98ede3ed11ab71bed18f4a54f773518062ceedb48352e1d21a8abce2748a7198"} Feb 14 10:54:30 crc kubenswrapper[4736]: I0214 10:54:30.049109 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" 
event={"ID":"e3223403-6c82-4af7-8a7a-902982281d8b","Type":"ContainerStarted","Data":"d629ca911934f65dbf8bdb44bff4031c8e998bb424b6fe02b467fb5f5776d6c1"} Feb 14 10:54:30 crc kubenswrapper[4736]: I0214 10:54:30.051624 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ffd9bdb95-nkvk6" event={"ID":"2eea7bcb-08c8-4d84-9a14-f63837398ae3","Type":"ContainerStarted","Data":"86f2d7c11103fe063644cd02723336c95c2f2aa79f94cd759e10d54e0775962b"} Feb 14 10:54:30 crc kubenswrapper[4736]: I0214 10:54:30.051658 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ffd9bdb95-nkvk6" event={"ID":"2eea7bcb-08c8-4d84-9a14-f63837398ae3","Type":"ContainerStarted","Data":"dd24fa764b3b92e87f599fd42b83df41b0a2b4433052acce09d559d51c3981a3"} Feb 14 10:54:30 crc kubenswrapper[4736]: I0214 10:54:30.081174 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6ffd9bdb95-nkvk6" podStartSLOduration=2.081156717 podStartE2EDuration="2.081156717s" podCreationTimestamp="2026-02-14 10:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:54:30.076267087 +0000 UTC m=+780.444894465" watchObservedRunningTime="2026-02-14 10:54:30.081156717 +0000 UTC m=+780.449784085" Feb 14 10:54:32 crc kubenswrapper[4736]: I0214 10:54:32.060806 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" event={"ID":"73130fe2-047e-44a9-986b-0734857df7a6","Type":"ContainerStarted","Data":"a635cfa333cf9fc488c3944fa8af247f27d10c888e15890863d1b7b3692b719a"} Feb 14 10:54:32 crc kubenswrapper[4736]: I0214 10:54:32.061397 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:54:32 crc kubenswrapper[4736]: I0214 10:54:32.062289 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-krg4j" event={"ID":"e04e8849-dd0e-4a3f-98e0-8925563c7145","Type":"ContainerStarted","Data":"73ef30d9c96fb6cc28d4f41548e0112889ec3b426ed446604511a5fc219eaa93"} Feb 14 10:54:32 crc kubenswrapper[4736]: I0214 10:54:32.063299 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tfkkw" event={"ID":"1799375f-7713-43d7-a0b2-9c76efff7daf","Type":"ContainerStarted","Data":"30c3abcdb975b2988d308d5c156b689c5aa3d03a90dcb4aef5e74392a7e7d604"} Feb 14 10:54:32 crc kubenswrapper[4736]: I0214 10:54:32.063683 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:32 crc kubenswrapper[4736]: I0214 10:54:32.080403 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" podStartSLOduration=2.203192587 podStartE2EDuration="4.080383314s" podCreationTimestamp="2026-02-14 10:54:28 +0000 UTC" firstStartedPulling="2026-02-14 10:54:29.780661304 +0000 UTC m=+780.149288672" lastFinishedPulling="2026-02-14 10:54:31.657852011 +0000 UTC m=+782.026479399" observedRunningTime="2026-02-14 10:54:32.080045194 +0000 UTC m=+782.448672592" watchObservedRunningTime="2026-02-14 10:54:32.080383314 +0000 UTC m=+782.449010692" Feb 14 10:54:32 crc kubenswrapper[4736]: I0214 10:54:32.109464 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tfkkw" podStartSLOduration=1.273850092 podStartE2EDuration="4.109442455s" podCreationTimestamp="2026-02-14 10:54:28 +0000 UTC" firstStartedPulling="2026-02-14 10:54:28.796301036 +0000 UTC m=+779.164928404" lastFinishedPulling="2026-02-14 10:54:31.631893389 +0000 UTC m=+782.000520767" observedRunningTime="2026-02-14 10:54:32.099545192 +0000 UTC m=+782.468172560" watchObservedRunningTime="2026-02-14 10:54:32.109442455 +0000 UTC m=+782.478069823" Feb 14 10:54:34 crc 
kubenswrapper[4736]: I0214 10:54:34.090194 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" event={"ID":"e3223403-6c82-4af7-8a7a-902982281d8b","Type":"ContainerStarted","Data":"10f9fa9314c0292c93a7d645514d6873eae3e4feb92be6bd8450db44ce9728a7"} Feb 14 10:54:34 crc kubenswrapper[4736]: I0214 10:54:34.110969 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-989cb" podStartSLOduration=3.061693754 podStartE2EDuration="6.110949217s" podCreationTimestamp="2026-02-14 10:54:28 +0000 UTC" firstStartedPulling="2026-02-14 10:54:29.733039772 +0000 UTC m=+780.101667140" lastFinishedPulling="2026-02-14 10:54:32.782295235 +0000 UTC m=+783.150922603" observedRunningTime="2026-02-14 10:54:34.110056832 +0000 UTC m=+784.478684200" watchObservedRunningTime="2026-02-14 10:54:34.110949217 +0000 UTC m=+784.479576605" Feb 14 10:54:35 crc kubenswrapper[4736]: I0214 10:54:35.098788 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-krg4j" event={"ID":"e04e8849-dd0e-4a3f-98e0-8925563c7145","Type":"ContainerStarted","Data":"8fb57d7488ae91c71952e67194493adca8ea4f6f1d807af7ad39c5c93ccffba9"} Feb 14 10:54:38 crc kubenswrapper[4736]: I0214 10:54:38.781027 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tfkkw" Feb 14 10:54:38 crc kubenswrapper[4736]: I0214 10:54:38.797098 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-krg4j" podStartSLOduration=5.2499639479999995 podStartE2EDuration="10.797080576s" podCreationTimestamp="2026-02-14 10:54:28 +0000 UTC" firstStartedPulling="2026-02-14 10:54:28.999481306 +0000 UTC m=+779.368108674" lastFinishedPulling="2026-02-14 10:54:34.546597934 +0000 UTC m=+784.915225302" observedRunningTime="2026-02-14 10:54:35.129993727 
+0000 UTC m=+785.498621135" watchObservedRunningTime="2026-02-14 10:54:38.797080576 +0000 UTC m=+789.165707944" Feb 14 10:54:39 crc kubenswrapper[4736]: I0214 10:54:39.132605 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:39 crc kubenswrapper[4736]: I0214 10:54:39.132850 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:39 crc kubenswrapper[4736]: I0214 10:54:39.138790 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:40 crc kubenswrapper[4736]: I0214 10:54:40.135100 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6ffd9bdb95-nkvk6" Feb 14 10:54:40 crc kubenswrapper[4736]: I0214 10:54:40.201075 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r4f7j"] Feb 14 10:54:47 crc kubenswrapper[4736]: I0214 10:54:47.695251 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:54:47 crc kubenswrapper[4736]: I0214 10:54:47.695784 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:54:49 crc kubenswrapper[4736]: I0214 10:54:49.372507 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-2gjv9" Feb 14 10:55:05 crc 
kubenswrapper[4736]: I0214 10:55:05.259889 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-r4f7j" podUID="19ffdb45-8f94-48d2-93f8-b139825d4063" containerName="console" containerID="cri-o://2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578" gracePeriod=15 Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.570525 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj"] Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.571937 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.574902 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.588397 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj"] Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.636512 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r4f7j_19ffdb45-8f94-48d2-93f8-b139825d4063/console/0.log" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.636568 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-r4f7j" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.724433 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.724474 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wjtn\" (UniqueName: \"kubernetes.io/projected/bdf58939-65c9-4c99-9116-99f56d96754f-kube-api-access-2wjtn\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.724544 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.825717 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-service-ca\") pod \"19ffdb45-8f94-48d2-93f8-b139825d4063\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.825833 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-2xg66\" (UniqueName: \"kubernetes.io/projected/19ffdb45-8f94-48d2-93f8-b139825d4063-kube-api-access-2xg66\") pod \"19ffdb45-8f94-48d2-93f8-b139825d4063\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.825892 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-trusted-ca-bundle\") pod \"19ffdb45-8f94-48d2-93f8-b139825d4063\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.825918 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-oauth-config\") pod \"19ffdb45-8f94-48d2-93f8-b139825d4063\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.825940 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-oauth-serving-cert\") pod \"19ffdb45-8f94-48d2-93f8-b139825d4063\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.825961 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-console-config\") pod \"19ffdb45-8f94-48d2-93f8-b139825d4063\" (UID: \"19ffdb45-8f94-48d2-93f8-b139825d4063\") " Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.825982 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-serving-cert\") pod \"19ffdb45-8f94-48d2-93f8-b139825d4063\" (UID: 
\"19ffdb45-8f94-48d2-93f8-b139825d4063\") " Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.826163 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.826201 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.826222 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wjtn\" (UniqueName: \"kubernetes.io/projected/bdf58939-65c9-4c99-9116-99f56d96754f-kube-api-access-2wjtn\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.827027 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "19ffdb45-8f94-48d2-93f8-b139825d4063" (UID: "19ffdb45-8f94-48d2-93f8-b139825d4063"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.827070 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-service-ca" (OuterVolumeSpecName: "service-ca") pod "19ffdb45-8f94-48d2-93f8-b139825d4063" (UID: "19ffdb45-8f94-48d2-93f8-b139825d4063"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.827102 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-console-config" (OuterVolumeSpecName: "console-config") pod "19ffdb45-8f94-48d2-93f8-b139825d4063" (UID: "19ffdb45-8f94-48d2-93f8-b139825d4063"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.827233 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.827288 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.827398 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "19ffdb45-8f94-48d2-93f8-b139825d4063" (UID: "19ffdb45-8f94-48d2-93f8-b139825d4063"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.833104 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "19ffdb45-8f94-48d2-93f8-b139825d4063" (UID: "19ffdb45-8f94-48d2-93f8-b139825d4063"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.833298 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "19ffdb45-8f94-48d2-93f8-b139825d4063" (UID: "19ffdb45-8f94-48d2-93f8-b139825d4063"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.833736 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ffdb45-8f94-48d2-93f8-b139825d4063-kube-api-access-2xg66" (OuterVolumeSpecName: "kube-api-access-2xg66") pod "19ffdb45-8f94-48d2-93f8-b139825d4063" (UID: "19ffdb45-8f94-48d2-93f8-b139825d4063"). InnerVolumeSpecName "kube-api-access-2xg66". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.847264 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wjtn\" (UniqueName: \"kubernetes.io/projected/bdf58939-65c9-4c99-9116-99f56d96754f-kube-api-access-2wjtn\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj"
Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.901716 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj"
Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.927829 4736 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-service-ca\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.927867 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xg66\" (UniqueName: \"kubernetes.io/projected/19ffdb45-8f94-48d2-93f8-b139825d4063-kube-api-access-2xg66\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.927880 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.927892 4736 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.927906 4736 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.927916 4736 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19ffdb45-8f94-48d2-93f8-b139825d4063-console-config\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:05 crc kubenswrapper[4736]: I0214 10:55:05.927927 4736 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ffdb45-8f94-48d2-93f8-b139825d4063-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.142540 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj"]
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.323454 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r4f7j_19ffdb45-8f94-48d2-93f8-b139825d4063/console/0.log"
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.323499 4736 generic.go:334] "Generic (PLEG): container finished" podID="19ffdb45-8f94-48d2-93f8-b139825d4063" containerID="2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578" exitCode=2
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.323552 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r4f7j" event={"ID":"19ffdb45-8f94-48d2-93f8-b139825d4063","Type":"ContainerDied","Data":"2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578"}
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.323569 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r4f7j"
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.323579 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r4f7j" event={"ID":"19ffdb45-8f94-48d2-93f8-b139825d4063","Type":"ContainerDied","Data":"b5767418c8902edd803a86a85b49c26377a6d5ad7e019c9951f2288e46c940f7"}
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.323597 4736 scope.go:117] "RemoveContainer" containerID="2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578"
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.330207 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" event={"ID":"bdf58939-65c9-4c99-9116-99f56d96754f","Type":"ContainerStarted","Data":"eb7b6aadf0f9a767f81b6003d48ab95e9aac1ce1e89ab10b8a49eaa9f626e948"}
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.357440 4736 scope.go:117] "RemoveContainer" containerID="2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578"
Feb 14 10:55:06 crc kubenswrapper[4736]: E0214 10:55:06.358416 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578\": container with ID starting with 2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578 not found: ID does not exist" containerID="2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578"
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.358465 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578"} err="failed to get container status \"2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578\": rpc error: code = NotFound desc = could not find container \"2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578\": container with ID starting with 2638566deead5c38fb1783c3521832034383f2e57d3de3af7bc50fe2be61e578 not found: ID does not exist"
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.371405 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r4f7j"]
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.375649 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-r4f7j"]
Feb 14 10:55:06 crc kubenswrapper[4736]: I0214 10:55:06.404177 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ffdb45-8f94-48d2-93f8-b139825d4063" path="/var/lib/kubelet/pods/19ffdb45-8f94-48d2-93f8-b139825d4063/volumes"
Feb 14 10:55:07 crc kubenswrapper[4736]: I0214 10:55:07.337618 4736 generic.go:334] "Generic (PLEG): container finished" podID="bdf58939-65c9-4c99-9116-99f56d96754f" containerID="3a41cf9c6381a3ca6a42e28663533405aba397effd4da298d3e50a772ee8a8f7" exitCode=0
Feb 14 10:55:07 crc kubenswrapper[4736]: I0214 10:55:07.337699 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" event={"ID":"bdf58939-65c9-4c99-9116-99f56d96754f","Type":"ContainerDied","Data":"3a41cf9c6381a3ca6a42e28663533405aba397effd4da298d3e50a772ee8a8f7"}
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.136338 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b22dk"]
Feb 14 10:55:09 crc kubenswrapper[4736]: E0214 10:55:09.137175 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ffdb45-8f94-48d2-93f8-b139825d4063" containerName="console"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.137202 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ffdb45-8f94-48d2-93f8-b139825d4063" containerName="console"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.137479 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ffdb45-8f94-48d2-93f8-b139825d4063" containerName="console"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.139311 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.156079 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b22dk"]
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.274725 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-utilities\") pod \"redhat-operators-b22dk\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.274910 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7rk4\" (UniqueName: \"kubernetes.io/projected/962a2a02-6cea-47fe-bda9-1fb3b4372706-kube-api-access-t7rk4\") pod \"redhat-operators-b22dk\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.274988 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-catalog-content\") pod \"redhat-operators-b22dk\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.351709 4736 generic.go:334] "Generic (PLEG): container finished" podID="bdf58939-65c9-4c99-9116-99f56d96754f" containerID="d75791e2c797611fdf2bcc30bc948cce77fe973b2445c972c6d0c3a71925bfa4" exitCode=0
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.351790 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" event={"ID":"bdf58939-65c9-4c99-9116-99f56d96754f","Type":"ContainerDied","Data":"d75791e2c797611fdf2bcc30bc948cce77fe973b2445c972c6d0c3a71925bfa4"}
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.376703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7rk4\" (UniqueName: \"kubernetes.io/projected/962a2a02-6cea-47fe-bda9-1fb3b4372706-kube-api-access-t7rk4\") pod \"redhat-operators-b22dk\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.376815 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-catalog-content\") pod \"redhat-operators-b22dk\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.376920 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-utilities\") pod \"redhat-operators-b22dk\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.377611 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-utilities\") pod \"redhat-operators-b22dk\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.377658 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-catalog-content\") pod \"redhat-operators-b22dk\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.397831 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7rk4\" (UniqueName: \"kubernetes.io/projected/962a2a02-6cea-47fe-bda9-1fb3b4372706-kube-api-access-t7rk4\") pod \"redhat-operators-b22dk\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.458397 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:09 crc kubenswrapper[4736]: I0214 10:55:09.668428 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b22dk"]
Feb 14 10:55:09 crc kubenswrapper[4736]: W0214 10:55:09.693512 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962a2a02_6cea_47fe_bda9_1fb3b4372706.slice/crio-79346d9b65f429cca74119bff27724977bb3b5abf0605dc63c1413f4e47c7eee WatchSource:0}: Error finding container 79346d9b65f429cca74119bff27724977bb3b5abf0605dc63c1413f4e47c7eee: Status 404 returned error can't find the container with id 79346d9b65f429cca74119bff27724977bb3b5abf0605dc63c1413f4e47c7eee
Feb 14 10:55:10 crc kubenswrapper[4736]: I0214 10:55:10.358766 4736 generic.go:334] "Generic (PLEG): container finished" podID="bdf58939-65c9-4c99-9116-99f56d96754f" containerID="800a6059e0b3eeecfd3d74524013de75177b4f0fe61c75c1cf0b33a97236dcdc" exitCode=0
Feb 14 10:55:10 crc kubenswrapper[4736]: I0214 10:55:10.358838 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" event={"ID":"bdf58939-65c9-4c99-9116-99f56d96754f","Type":"ContainerDied","Data":"800a6059e0b3eeecfd3d74524013de75177b4f0fe61c75c1cf0b33a97236dcdc"}
Feb 14 10:55:10 crc kubenswrapper[4736]: I0214 10:55:10.360097 4736 generic.go:334] "Generic (PLEG): container finished" podID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerID="1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c" exitCode=0
Feb 14 10:55:10 crc kubenswrapper[4736]: I0214 10:55:10.360126 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b22dk" event={"ID":"962a2a02-6cea-47fe-bda9-1fb3b4372706","Type":"ContainerDied","Data":"1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c"}
Feb 14 10:55:10 crc kubenswrapper[4736]: I0214 10:55:10.360162 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b22dk" event={"ID":"962a2a02-6cea-47fe-bda9-1fb3b4372706","Type":"ContainerStarted","Data":"79346d9b65f429cca74119bff27724977bb3b5abf0605dc63c1413f4e47c7eee"}
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.374704 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b22dk" event={"ID":"962a2a02-6cea-47fe-bda9-1fb3b4372706","Type":"ContainerStarted","Data":"92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72"}
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.648207 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj"
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.807821 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-bundle\") pod \"bdf58939-65c9-4c99-9116-99f56d96754f\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") "
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.807927 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-util\") pod \"bdf58939-65c9-4c99-9116-99f56d96754f\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") "
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.807972 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wjtn\" (UniqueName: \"kubernetes.io/projected/bdf58939-65c9-4c99-9116-99f56d96754f-kube-api-access-2wjtn\") pod \"bdf58939-65c9-4c99-9116-99f56d96754f\" (UID: \"bdf58939-65c9-4c99-9116-99f56d96754f\") "
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.809488 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-bundle" (OuterVolumeSpecName: "bundle") pod "bdf58939-65c9-4c99-9116-99f56d96754f" (UID: "bdf58939-65c9-4c99-9116-99f56d96754f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.818956 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf58939-65c9-4c99-9116-99f56d96754f-kube-api-access-2wjtn" (OuterVolumeSpecName: "kube-api-access-2wjtn") pod "bdf58939-65c9-4c99-9116-99f56d96754f" (UID: "bdf58939-65c9-4c99-9116-99f56d96754f"). InnerVolumeSpecName "kube-api-access-2wjtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.840646 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-util" (OuterVolumeSpecName: "util") pod "bdf58939-65c9-4c99-9116-99f56d96754f" (UID: "bdf58939-65c9-4c99-9116-99f56d96754f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.910403 4736 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-util\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.910465 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wjtn\" (UniqueName: \"kubernetes.io/projected/bdf58939-65c9-4c99-9116-99f56d96754f-kube-api-access-2wjtn\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:11 crc kubenswrapper[4736]: I0214 10:55:11.910479 4736 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdf58939-65c9-4c99-9116-99f56d96754f-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 10:55:12 crc kubenswrapper[4736]: I0214 10:55:12.382859 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj"
Feb 14 10:55:12 crc kubenswrapper[4736]: I0214 10:55:12.382879 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj" event={"ID":"bdf58939-65c9-4c99-9116-99f56d96754f","Type":"ContainerDied","Data":"eb7b6aadf0f9a767f81b6003d48ab95e9aac1ce1e89ab10b8a49eaa9f626e948"}
Feb 14 10:55:12 crc kubenswrapper[4736]: I0214 10:55:12.382939 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7b6aadf0f9a767f81b6003d48ab95e9aac1ce1e89ab10b8a49eaa9f626e948"
Feb 14 10:55:12 crc kubenswrapper[4736]: I0214 10:55:12.384555 4736 generic.go:334] "Generic (PLEG): container finished" podID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerID="92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72" exitCode=0
Feb 14 10:55:12 crc kubenswrapper[4736]: I0214 10:55:12.384594 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b22dk" event={"ID":"962a2a02-6cea-47fe-bda9-1fb3b4372706","Type":"ContainerDied","Data":"92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72"}
Feb 14 10:55:14 crc kubenswrapper[4736]: I0214 10:55:14.419284 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b22dk" event={"ID":"962a2a02-6cea-47fe-bda9-1fb3b4372706","Type":"ContainerStarted","Data":"90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d"}
Feb 14 10:55:14 crc kubenswrapper[4736]: I0214 10:55:14.426135 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b22dk" podStartSLOduration=1.975879279 podStartE2EDuration="5.426117637s" podCreationTimestamp="2026-02-14 10:55:09 +0000 UTC" firstStartedPulling="2026-02-14 10:55:10.361346907 +0000 UTC m=+820.729974275" lastFinishedPulling="2026-02-14 10:55:13.811585265 +0000 UTC m=+824.180212633" observedRunningTime="2026-02-14 10:55:14.419583841 +0000 UTC m=+824.788211209" watchObservedRunningTime="2026-02-14 10:55:14.426117637 +0000 UTC m=+824.794745005"
Feb 14 10:55:17 crc kubenswrapper[4736]: I0214 10:55:17.696006 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 10:55:17 crc kubenswrapper[4736]: I0214 10:55:17.696342 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 10:55:17 crc kubenswrapper[4736]: I0214 10:55:17.696392 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj"
Feb 14 10:55:17 crc kubenswrapper[4736]: I0214 10:55:17.697010 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b08eeda0c39616325bfc380aaaad11c6609c5f301d0c07f4fa3e51c6e12894e"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 10:55:17 crc kubenswrapper[4736]: I0214 10:55:17.697063 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://8b08eeda0c39616325bfc380aaaad11c6609c5f301d0c07f4fa3e51c6e12894e" gracePeriod=600
Feb 14 10:55:19 crc kubenswrapper[4736]: I0214 10:55:19.439834 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="8b08eeda0c39616325bfc380aaaad11c6609c5f301d0c07f4fa3e51c6e12894e" exitCode=0
Feb 14 10:55:19 crc kubenswrapper[4736]: I0214 10:55:19.439925 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"8b08eeda0c39616325bfc380aaaad11c6609c5f301d0c07f4fa3e51c6e12894e"}
Feb 14 10:55:19 crc kubenswrapper[4736]: I0214 10:55:19.440669 4736 scope.go:117] "RemoveContainer" containerID="3d207ea0142334a7f5274ab321669d0403e70d9633dff4e2ac99690c497158f8"
Feb 14 10:55:19 crc kubenswrapper[4736]: I0214 10:55:19.459513 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:19 crc kubenswrapper[4736]: I0214 10:55:19.460520 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:20 crc kubenswrapper[4736]: I0214 10:55:20.446147 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"0699b94691595822651ec4333c313c55f239b38c83c6b942a3933b33334d5715"}
Feb 14 10:55:20 crc kubenswrapper[4736]: I0214 10:55:20.506856 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b22dk" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerName="registry-server" probeResult="failure" output=<
Feb 14 10:55:20 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s
Feb 14 10:55:20 crc kubenswrapper[4736]: >
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.310408 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"]
Feb 14 10:55:21 crc kubenswrapper[4736]: E0214 10:55:21.310617 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf58939-65c9-4c99-9116-99f56d96754f" containerName="pull"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.310628 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf58939-65c9-4c99-9116-99f56d96754f" containerName="pull"
Feb 14 10:55:21 crc kubenswrapper[4736]: E0214 10:55:21.310639 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf58939-65c9-4c99-9116-99f56d96754f" containerName="util"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.310645 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf58939-65c9-4c99-9116-99f56d96754f" containerName="util"
Feb 14 10:55:21 crc kubenswrapper[4736]: E0214 10:55:21.310662 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf58939-65c9-4c99-9116-99f56d96754f" containerName="extract"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.310668 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf58939-65c9-4c99-9116-99f56d96754f" containerName="extract"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.310813 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf58939-65c9-4c99-9116-99f56d96754f" containerName="extract"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.311163 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.314604 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.314879 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-427tp"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.314987 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.315177 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.315472 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.332528 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"]
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.423121 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b64209dc-83e7-4c67-920c-0e8d9369d823-webhook-cert\") pod \"metallb-operator-controller-manager-575f5cbc8b-mg2p4\" (UID: \"b64209dc-83e7-4c67-920c-0e8d9369d823\") " pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.423185 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49tp\" (UniqueName: \"kubernetes.io/projected/b64209dc-83e7-4c67-920c-0e8d9369d823-kube-api-access-h49tp\") pod \"metallb-operator-controller-manager-575f5cbc8b-mg2p4\" (UID: \"b64209dc-83e7-4c67-920c-0e8d9369d823\") " pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.423218 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b64209dc-83e7-4c67-920c-0e8d9369d823-apiservice-cert\") pod \"metallb-operator-controller-manager-575f5cbc8b-mg2p4\" (UID: \"b64209dc-83e7-4c67-920c-0e8d9369d823\") " pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.524602 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h49tp\" (UniqueName: \"kubernetes.io/projected/b64209dc-83e7-4c67-920c-0e8d9369d823-kube-api-access-h49tp\") pod \"metallb-operator-controller-manager-575f5cbc8b-mg2p4\" (UID: \"b64209dc-83e7-4c67-920c-0e8d9369d823\") " pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.525934 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b64209dc-83e7-4c67-920c-0e8d9369d823-apiservice-cert\") pod \"metallb-operator-controller-manager-575f5cbc8b-mg2p4\" (UID: \"b64209dc-83e7-4c67-920c-0e8d9369d823\") " pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.526859 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b64209dc-83e7-4c67-920c-0e8d9369d823-webhook-cert\") pod \"metallb-operator-controller-manager-575f5cbc8b-mg2p4\" (UID: \"b64209dc-83e7-4c67-920c-0e8d9369d823\") " pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.533682 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b64209dc-83e7-4c67-920c-0e8d9369d823-apiservice-cert\") pod \"metallb-operator-controller-manager-575f5cbc8b-mg2p4\" (UID: \"b64209dc-83e7-4c67-920c-0e8d9369d823\") " pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.534365 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b64209dc-83e7-4c67-920c-0e8d9369d823-webhook-cert\") pod \"metallb-operator-controller-manager-575f5cbc8b-mg2p4\" (UID: \"b64209dc-83e7-4c67-920c-0e8d9369d823\") " pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.552448 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h49tp\" (UniqueName: \"kubernetes.io/projected/b64209dc-83e7-4c67-920c-0e8d9369d823-kube-api-access-h49tp\") pod \"metallb-operator-controller-manager-575f5cbc8b-mg2p4\" (UID: \"b64209dc-83e7-4c67-920c-0e8d9369d823\") " pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.572152 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"]
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.572799 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.575661 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.575898 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.580502 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-pwq25"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.625165 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.647783 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"]
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.730132 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab7820de-1649-428e-b823-d28364520352-apiservice-cert\") pod \"metallb-operator-webhook-server-69fc489c64-2rjlv\" (UID: \"ab7820de-1649-428e-b823-d28364520352\") " pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.730331 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab7820de-1649-428e-b823-d28364520352-webhook-cert\") pod \"metallb-operator-webhook-server-69fc489c64-2rjlv\" (UID: \"ab7820de-1649-428e-b823-d28364520352\") " pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.730368 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d258r\" (UniqueName: \"kubernetes.io/projected/ab7820de-1649-428e-b823-d28364520352-kube-api-access-d258r\") pod \"metallb-operator-webhook-server-69fc489c64-2rjlv\" (UID: \"ab7820de-1649-428e-b823-d28364520352\") " pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.832081 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab7820de-1649-428e-b823-d28364520352-apiservice-cert\") pod \"metallb-operator-webhook-server-69fc489c64-2rjlv\" (UID: \"ab7820de-1649-428e-b823-d28364520352\") " pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.832123 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab7820de-1649-428e-b823-d28364520352-webhook-cert\") pod \"metallb-operator-webhook-server-69fc489c64-2rjlv\" (UID: \"ab7820de-1649-428e-b823-d28364520352\") " pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.832154 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d258r\" (UniqueName: \"kubernetes.io/projected/ab7820de-1649-428e-b823-d28364520352-kube-api-access-d258r\") pod \"metallb-operator-webhook-server-69fc489c64-2rjlv\" (UID: \"ab7820de-1649-428e-b823-d28364520352\") " pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.837291 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab7820de-1649-428e-b823-d28364520352-apiservice-cert\") pod \"metallb-operator-webhook-server-69fc489c64-2rjlv\" (UID: \"ab7820de-1649-428e-b823-d28364520352\") " pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.854380 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab7820de-1649-428e-b823-d28364520352-webhook-cert\") pod \"metallb-operator-webhook-server-69fc489c64-2rjlv\" (UID: \"ab7820de-1649-428e-b823-d28364520352\") " pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.861531 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d258r\" (UniqueName: \"kubernetes.io/projected/ab7820de-1649-428e-b823-d28364520352-kube-api-access-d258r\") pod \"metallb-operator-webhook-server-69fc489c64-2rjlv\" (UID: \"ab7820de-1649-428e-b823-d28364520352\") " pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.896887 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:21 crc kubenswrapper[4736]: I0214 10:55:21.932860 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"]
Feb 14 10:55:22 crc kubenswrapper[4736]: I0214 10:55:22.169379 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"]
Feb 14 10:55:22 crc kubenswrapper[4736]: W0214 10:55:22.171869 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab7820de_1649_428e_b823_d28364520352.slice/crio-983d96b2e6e36c2cab04bc7313e528055201316341ac6fee7a5b6ee9a48b2e08 WatchSource:0}: Error finding container 983d96b2e6e36c2cab04bc7313e528055201316341ac6fee7a5b6ee9a48b2e08: Status 404 returned error can't find the container with id 983d96b2e6e36c2cab04bc7313e528055201316341ac6fee7a5b6ee9a48b2e08
Feb 14 10:55:22 crc kubenswrapper[4736]: I0214 10:55:22.456800 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4" event={"ID":"b64209dc-83e7-4c67-920c-0e8d9369d823","Type":"ContainerStarted","Data":"7df1cf9799fc665c940fdf565e349acb14ded0fca1dd655e6bc6be74b3924c51"}
Feb 14 10:55:22 crc kubenswrapper[4736]: I0214 10:55:22.458161 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv" event={"ID":"ab7820de-1649-428e-b823-d28364520352","Type":"ContainerStarted","Data":"983d96b2e6e36c2cab04bc7313e528055201316341ac6fee7a5b6ee9a48b2e08"}
Feb 14 10:55:29 crc kubenswrapper[4736]: I0214 10:55:29.505630 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv" event={"ID":"ab7820de-1649-428e-b823-d28364520352","Type":"ContainerStarted","Data":"179d551ec6959b23bcc1a610aba403e8de09d34d24a4c335aadf0673605983ef"}
Feb 14 10:55:29 crc kubenswrapper[4736]: I0214 10:55:29.506290 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv"
Feb 14 10:55:29 crc kubenswrapper[4736]: I0214 10:55:29.507074 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4" event={"ID":"b64209dc-83e7-4c67-920c-0e8d9369d823","Type":"ContainerStarted","Data":"8562b257d5c1e259b529091b010567b835f0eddc2a7e2669b7d945b1e0d36187"}
Feb 14 10:55:29 crc kubenswrapper[4736]: I0214 10:55:29.507518 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4"
Feb 14 10:55:29 crc kubenswrapper[4736]: I0214 10:55:29.542778 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv" podStartSLOduration=1.595245292 podStartE2EDuration="8.542763895s" podCreationTimestamp="2026-02-14 10:55:21 +0000 UTC" firstStartedPulling="2026-02-14 10:55:22.174697167 +0000 UTC m=+832.543324535" lastFinishedPulling="2026-02-14 10:55:29.12221577 +0000 UTC m=+839.490843138" observedRunningTime="2026-02-14 10:55:29.539063769 +0000 UTC m=+839.907691148" watchObservedRunningTime="2026-02-14 10:55:29.542763895 +0000 UTC m=+839.911391263"
Feb 14 10:55:29 crc kubenswrapper[4736]: I0214 10:55:29.566274 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b22dk"
Feb 14 10:55:29 crc kubenswrapper[4736]: I0214 10:55:29.574072 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4" podStartSLOduration=1.4158100820000001
podStartE2EDuration="8.57405566s" podCreationTimestamp="2026-02-14 10:55:21 +0000 UTC" firstStartedPulling="2026-02-14 10:55:21.961894662 +0000 UTC m=+832.330522030" lastFinishedPulling="2026-02-14 10:55:29.12014024 +0000 UTC m=+839.488767608" observedRunningTime="2026-02-14 10:55:29.568738338 +0000 UTC m=+839.937365696" watchObservedRunningTime="2026-02-14 10:55:29.57405566 +0000 UTC m=+839.942683018" Feb 14 10:55:29 crc kubenswrapper[4736]: I0214 10:55:29.639432 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b22dk" Feb 14 10:55:29 crc kubenswrapper[4736]: I0214 10:55:29.832462 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b22dk"] Feb 14 10:55:31 crc kubenswrapper[4736]: I0214 10:55:31.517028 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b22dk" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerName="registry-server" containerID="cri-o://90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d" gracePeriod=2 Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.220491 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b22dk" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.382610 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-catalog-content\") pod \"962a2a02-6cea-47fe-bda9-1fb3b4372706\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.382690 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-utilities\") pod \"962a2a02-6cea-47fe-bda9-1fb3b4372706\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.382719 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7rk4\" (UniqueName: \"kubernetes.io/projected/962a2a02-6cea-47fe-bda9-1fb3b4372706-kube-api-access-t7rk4\") pod \"962a2a02-6cea-47fe-bda9-1fb3b4372706\" (UID: \"962a2a02-6cea-47fe-bda9-1fb3b4372706\") " Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.383695 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-utilities" (OuterVolumeSpecName: "utilities") pod "962a2a02-6cea-47fe-bda9-1fb3b4372706" (UID: "962a2a02-6cea-47fe-bda9-1fb3b4372706"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.393901 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962a2a02-6cea-47fe-bda9-1fb3b4372706-kube-api-access-t7rk4" (OuterVolumeSpecName: "kube-api-access-t7rk4") pod "962a2a02-6cea-47fe-bda9-1fb3b4372706" (UID: "962a2a02-6cea-47fe-bda9-1fb3b4372706"). InnerVolumeSpecName "kube-api-access-t7rk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.484646 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.484679 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7rk4\" (UniqueName: \"kubernetes.io/projected/962a2a02-6cea-47fe-bda9-1fb3b4372706-kube-api-access-t7rk4\") on node \"crc\" DevicePath \"\"" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.516847 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "962a2a02-6cea-47fe-bda9-1fb3b4372706" (UID: "962a2a02-6cea-47fe-bda9-1fb3b4372706"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.522892 4736 generic.go:334] "Generic (PLEG): container finished" podID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerID="90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d" exitCode=0 Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.522935 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b22dk" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.522936 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b22dk" event={"ID":"962a2a02-6cea-47fe-bda9-1fb3b4372706","Type":"ContainerDied","Data":"90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d"} Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.523050 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b22dk" event={"ID":"962a2a02-6cea-47fe-bda9-1fb3b4372706","Type":"ContainerDied","Data":"79346d9b65f429cca74119bff27724977bb3b5abf0605dc63c1413f4e47c7eee"} Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.523069 4736 scope.go:117] "RemoveContainer" containerID="90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.544510 4736 scope.go:117] "RemoveContainer" containerID="92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.556339 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b22dk"] Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.560942 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b22dk"] Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.576287 4736 scope.go:117] "RemoveContainer" containerID="1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.586140 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962a2a02-6cea-47fe-bda9-1fb3b4372706-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.596998 4736 scope.go:117] "RemoveContainer" 
containerID="90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d" Feb 14 10:55:32 crc kubenswrapper[4736]: E0214 10:55:32.597426 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d\": container with ID starting with 90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d not found: ID does not exist" containerID="90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.597460 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d"} err="failed to get container status \"90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d\": rpc error: code = NotFound desc = could not find container \"90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d\": container with ID starting with 90d41e8ec6931a7aeaa08620fe0c91618183cc5e0314966f36b4fcbef6e2912d not found: ID does not exist" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.597482 4736 scope.go:117] "RemoveContainer" containerID="92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72" Feb 14 10:55:32 crc kubenswrapper[4736]: E0214 10:55:32.597893 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72\": container with ID starting with 92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72 not found: ID does not exist" containerID="92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.598011 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72"} err="failed to get container status \"92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72\": rpc error: code = NotFound desc = could not find container \"92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72\": container with ID starting with 92b2c72ae8928e9098e1c2f47dfafd3755ae3fffba0d553135f4f49ffe22fe72 not found: ID does not exist" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.598936 4736 scope.go:117] "RemoveContainer" containerID="1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c" Feb 14 10:55:32 crc kubenswrapper[4736]: E0214 10:55:32.599183 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c\": container with ID starting with 1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c not found: ID does not exist" containerID="1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c" Feb 14 10:55:32 crc kubenswrapper[4736]: I0214 10:55:32.599204 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c"} err="failed to get container status \"1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c\": rpc error: code = NotFound desc = could not find container \"1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c\": container with ID starting with 1c570513ff12764621ddc62215c7573d81592f078b4825cb119e313e5e93077c not found: ID does not exist" Feb 14 10:55:34 crc kubenswrapper[4736]: I0214 10:55:34.403915 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" path="/var/lib/kubelet/pods/962a2a02-6cea-47fe-bda9-1fb3b4372706/volumes" Feb 14 10:55:41 crc kubenswrapper[4736]: I0214 
10:55:41.906238 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-69fc489c64-2rjlv" Feb 14 10:56:01 crc kubenswrapper[4736]: I0214 10:56:01.629941 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-575f5cbc8b-mg2p4" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.422571 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wg9h9"] Feb 14 10:56:02 crc kubenswrapper[4736]: E0214 10:56:02.422852 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerName="extract-utilities" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.422879 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerName="extract-utilities" Feb 14 10:56:02 crc kubenswrapper[4736]: E0214 10:56:02.422895 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerName="registry-server" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.422903 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerName="registry-server" Feb 14 10:56:02 crc kubenswrapper[4736]: E0214 10:56:02.422922 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerName="extract-content" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.422930 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerName="extract-content" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.423055 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="962a2a02-6cea-47fe-bda9-1fb3b4372706" containerName="registry-server" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.425385 4736 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.428615 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-s5r5r" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.428917 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.429519 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.449212 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn"] Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.450065 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.453722 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.470057 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn"] Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.553939 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tm7cx"] Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.554779 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.557049 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.557203 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ccr7q" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.559878 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.559980 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-kxmtf"] Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.560047 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.561089 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.563728 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.575274 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-kxmtf"] Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.587467 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-frr-sockets\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.587506 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78e00005-015f-402a-9308-473763478d28-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-llcsn\" (UID: \"78e00005-015f-402a-9308-473763478d28\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.587527 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wff8x\" (UniqueName: \"kubernetes.io/projected/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-kube-api-access-wff8x\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.587784 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-frr-startup\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc 
kubenswrapper[4736]: I0214 10:56:02.587811 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj8pn\" (UniqueName: \"kubernetes.io/projected/78e00005-015f-402a-9308-473763478d28-kube-api-access-sj8pn\") pod \"frr-k8s-webhook-server-78b44bf5bb-llcsn\" (UID: \"78e00005-015f-402a-9308-473763478d28\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.587861 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-metrics-certs\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.588328 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-reloader\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.588483 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-metrics\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.588508 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-frr-conf\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690103 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7b23b424-9a4f-44c9-a999-7721acb1b135-metallb-excludel2\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690221 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-metrics-certs\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: E0214 10:56:02.690371 4736 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 14 10:56:02 crc kubenswrapper[4736]: E0214 10:56:02.690428 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-metrics-certs podName:efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6 nodeName:}" failed. No retries permitted until 2026-02-14 10:56:03.19041109 +0000 UTC m=+873.559038448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-metrics-certs") pod "frr-k8s-wg9h9" (UID: "efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6") : secret "frr-k8s-certs-secret" not found Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690651 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4a413eb-d17a-4f7f-bd22-d4f41f915d53-cert\") pod \"controller-69bbfbf88f-kxmtf\" (UID: \"d4a413eb-d17a-4f7f-bd22-d4f41f915d53\") " pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690677 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-reloader\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690717 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhfk\" (UniqueName: \"kubernetes.io/projected/d4a413eb-d17a-4f7f-bd22-d4f41f915d53-kube-api-access-lvhfk\") pod \"controller-69bbfbf88f-kxmtf\" (UID: \"d4a413eb-d17a-4f7f-bd22-d4f41f915d53\") " pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690767 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-metrics\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690783 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-frr-conf\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690800 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-memberlist\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690818 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vkr\" (UniqueName: \"kubernetes.io/projected/7b23b424-9a4f-44c9-a999-7721acb1b135-kube-api-access-j9vkr\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690836 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-frr-sockets\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690852 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4a413eb-d17a-4f7f-bd22-d4f41f915d53-metrics-certs\") pod \"controller-69bbfbf88f-kxmtf\" (UID: \"d4a413eb-d17a-4f7f-bd22-d4f41f915d53\") " pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690867 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-metrics-certs\") pod 
\"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690884 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78e00005-015f-402a-9308-473763478d28-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-llcsn\" (UID: \"78e00005-015f-402a-9308-473763478d28\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690901 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wff8x\" (UniqueName: \"kubernetes.io/projected/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-kube-api-access-wff8x\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690924 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-frr-startup\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.690940 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj8pn\" (UniqueName: \"kubernetes.io/projected/78e00005-015f-402a-9308-473763478d28-kube-api-access-sj8pn\") pod \"frr-k8s-webhook-server-78b44bf5bb-llcsn\" (UID: \"78e00005-015f-402a-9308-473763478d28\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.691431 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-reloader\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " 
pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.691612 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-metrics\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.691802 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-frr-conf\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.692002 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-frr-sockets\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.693524 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-frr-startup\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.701493 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78e00005-015f-402a-9308-473763478d28-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-llcsn\" (UID: \"78e00005-015f-402a-9308-473763478d28\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.707946 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wff8x\" (UniqueName: 
\"kubernetes.io/projected/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-kube-api-access-wff8x\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.708515 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj8pn\" (UniqueName: \"kubernetes.io/projected/78e00005-015f-402a-9308-473763478d28-kube-api-access-sj8pn\") pod \"frr-k8s-webhook-server-78b44bf5bb-llcsn\" (UID: \"78e00005-015f-402a-9308-473763478d28\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.765107 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.791724 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4a413eb-d17a-4f7f-bd22-d4f41f915d53-cert\") pod \"controller-69bbfbf88f-kxmtf\" (UID: \"d4a413eb-d17a-4f7f-bd22-d4f41f915d53\") " pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.791847 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvhfk\" (UniqueName: \"kubernetes.io/projected/d4a413eb-d17a-4f7f-bd22-d4f41f915d53-kube-api-access-lvhfk\") pod \"controller-69bbfbf88f-kxmtf\" (UID: \"d4a413eb-d17a-4f7f-bd22-d4f41f915d53\") " pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.791887 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-memberlist\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 
10:56:02.791911 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9vkr\" (UniqueName: \"kubernetes.io/projected/7b23b424-9a4f-44c9-a999-7721acb1b135-kube-api-access-j9vkr\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.791937 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4a413eb-d17a-4f7f-bd22-d4f41f915d53-metrics-certs\") pod \"controller-69bbfbf88f-kxmtf\" (UID: \"d4a413eb-d17a-4f7f-bd22-d4f41f915d53\") " pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.791959 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-metrics-certs\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.792012 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7b23b424-9a4f-44c9-a999-7721acb1b135-metallb-excludel2\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: E0214 10:56:02.792528 4736 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 14 10:56:02 crc kubenswrapper[4736]: E0214 10:56:02.792663 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-memberlist podName:7b23b424-9a4f-44c9-a999-7721acb1b135 nodeName:}" failed. No retries permitted until 2026-02-14 10:56:03.292631624 +0000 UTC m=+873.661258992 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-memberlist") pod "speaker-tm7cx" (UID: "7b23b424-9a4f-44c9-a999-7721acb1b135") : secret "metallb-memberlist" not found Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.792832 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7b23b424-9a4f-44c9-a999-7721acb1b135-metallb-excludel2\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.797316 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.797987 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4a413eb-d17a-4f7f-bd22-d4f41f915d53-metrics-certs\") pod \"controller-69bbfbf88f-kxmtf\" (UID: \"d4a413eb-d17a-4f7f-bd22-d4f41f915d53\") " pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.809899 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-metrics-certs\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.818254 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4a413eb-d17a-4f7f-bd22-d4f41f915d53-cert\") pod \"controller-69bbfbf88f-kxmtf\" (UID: \"d4a413eb-d17a-4f7f-bd22-d4f41f915d53\") " pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.822765 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-lvhfk\" (UniqueName: \"kubernetes.io/projected/d4a413eb-d17a-4f7f-bd22-d4f41f915d53-kube-api-access-lvhfk\") pod \"controller-69bbfbf88f-kxmtf\" (UID: \"d4a413eb-d17a-4f7f-bd22-d4f41f915d53\") " pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.837886 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9vkr\" (UniqueName: \"kubernetes.io/projected/7b23b424-9a4f-44c9-a999-7721acb1b135-kube-api-access-j9vkr\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:02 crc kubenswrapper[4736]: I0214 10:56:02.874719 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.106352 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-kxmtf"] Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.204233 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-metrics-certs\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.210428 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6-metrics-certs\") pod \"frr-k8s-wg9h9\" (UID: \"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6\") " pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.292374 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn"] Feb 14 10:56:03 crc kubenswrapper[4736]: W0214 10:56:03.300957 4736 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78e00005_015f_402a_9308_473763478d28.slice/crio-71497d90d1916f731a26104aba180936c7e5615ef63cf5d64b86ae2840f54a93 WatchSource:0}: Error finding container 71497d90d1916f731a26104aba180936c7e5615ef63cf5d64b86ae2840f54a93: Status 404 returned error can't find the container with id 71497d90d1916f731a26104aba180936c7e5615ef63cf5d64b86ae2840f54a93 Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.305950 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-memberlist\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:03 crc kubenswrapper[4736]: E0214 10:56:03.306563 4736 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 14 10:56:03 crc kubenswrapper[4736]: E0214 10:56:03.306667 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-memberlist podName:7b23b424-9a4f-44c9-a999-7721acb1b135 nodeName:}" failed. No retries permitted until 2026-02-14 10:56:04.306638474 +0000 UTC m=+874.675265852 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-memberlist") pod "speaker-tm7cx" (UID: "7b23b424-9a4f-44c9-a999-7721acb1b135") : secret "metallb-memberlist" not found Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.344030 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.809800 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerStarted","Data":"c51a7eb135d456960c77b8dcc1fc3ce4a50e8191b163355b3a40b1e5d990b50a"} Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.812838 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-kxmtf" event={"ID":"d4a413eb-d17a-4f7f-bd22-d4f41f915d53","Type":"ContainerStarted","Data":"d405f7797e9f03c4eb90108ad3fb36fd168204908d3e4bacc86962456eb9cd50"} Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.812867 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-kxmtf" event={"ID":"d4a413eb-d17a-4f7f-bd22-d4f41f915d53","Type":"ContainerStarted","Data":"a80d85b62a2272904e7f127c7b3442bff07e4e4ae643617a7ca4a6854d0c2a82"} Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.812881 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-kxmtf" event={"ID":"d4a413eb-d17a-4f7f-bd22-d4f41f915d53","Type":"ContainerStarted","Data":"ad07301603d8197bb7201a3763cd6c4de2622873a06c4408e154ea818855880b"} Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.813087 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.814106 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" event={"ID":"78e00005-015f-402a-9308-473763478d28","Type":"ContainerStarted","Data":"71497d90d1916f731a26104aba180936c7e5615ef63cf5d64b86ae2840f54a93"} Feb 14 10:56:03 crc kubenswrapper[4736]: I0214 10:56:03.833627 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/controller-69bbfbf88f-kxmtf" podStartSLOduration=1.8336065719999999 podStartE2EDuration="1.833606572s" podCreationTimestamp="2026-02-14 10:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:56:03.830002829 +0000 UTC m=+874.198630237" watchObservedRunningTime="2026-02-14 10:56:03.833606572 +0000 UTC m=+874.202233980" Feb 14 10:56:04 crc kubenswrapper[4736]: I0214 10:56:04.321653 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-memberlist\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:04 crc kubenswrapper[4736]: I0214 10:56:04.335514 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7b23b424-9a4f-44c9-a999-7721acb1b135-memberlist\") pod \"speaker-tm7cx\" (UID: \"7b23b424-9a4f-44c9-a999-7721acb1b135\") " pod="metallb-system/speaker-tm7cx" Feb 14 10:56:04 crc kubenswrapper[4736]: I0214 10:56:04.370331 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tm7cx" Feb 14 10:56:04 crc kubenswrapper[4736]: W0214 10:56:04.404686 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b23b424_9a4f_44c9_a999_7721acb1b135.slice/crio-f55e0ee908b83ce1b11d10d12bed68d1171b4155cd89dfe3ce8448cf06632ae1 WatchSource:0}: Error finding container f55e0ee908b83ce1b11d10d12bed68d1171b4155cd89dfe3ce8448cf06632ae1: Status 404 returned error can't find the container with id f55e0ee908b83ce1b11d10d12bed68d1171b4155cd89dfe3ce8448cf06632ae1 Feb 14 10:56:04 crc kubenswrapper[4736]: I0214 10:56:04.830454 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tm7cx" event={"ID":"7b23b424-9a4f-44c9-a999-7721acb1b135","Type":"ContainerStarted","Data":"1430f4ff1b4d672d14e0a22b52e252deb065303f0395bd5935e1d66e4a043dfe"} Feb 14 10:56:04 crc kubenswrapper[4736]: I0214 10:56:04.830762 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tm7cx" event={"ID":"7b23b424-9a4f-44c9-a999-7721acb1b135","Type":"ContainerStarted","Data":"f55e0ee908b83ce1b11d10d12bed68d1171b4155cd89dfe3ce8448cf06632ae1"} Feb 14 10:56:05 crc kubenswrapper[4736]: I0214 10:56:05.838062 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tm7cx" event={"ID":"7b23b424-9a4f-44c9-a999-7721acb1b135","Type":"ContainerStarted","Data":"22fd1d30dd46e0a5e23182392146b59b130df87deed77ead3f8c9964cb0c3ac6"} Feb 14 10:56:05 crc kubenswrapper[4736]: I0214 10:56:05.838918 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tm7cx" Feb 14 10:56:05 crc kubenswrapper[4736]: I0214 10:56:05.860931 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tm7cx" podStartSLOduration=3.860915743 podStartE2EDuration="3.860915743s" podCreationTimestamp="2026-02-14 10:56:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:56:05.857105165 +0000 UTC m=+876.225732533" watchObservedRunningTime="2026-02-14 10:56:05.860915743 +0000 UTC m=+876.229543111" Feb 14 10:56:10 crc kubenswrapper[4736]: I0214 10:56:10.869230 4736 generic.go:334] "Generic (PLEG): container finished" podID="efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6" containerID="6a294729f127c268c508b308e00768d7b987b49bbff0300118a93e6769c7098f" exitCode=0 Feb 14 10:56:10 crc kubenswrapper[4736]: I0214 10:56:10.869344 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerDied","Data":"6a294729f127c268c508b308e00768d7b987b49bbff0300118a93e6769c7098f"} Feb 14 10:56:10 crc kubenswrapper[4736]: I0214 10:56:10.873064 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" event={"ID":"78e00005-015f-402a-9308-473763478d28","Type":"ContainerStarted","Data":"6f6fbafc7c5abc3a0a67343a0f251489f578b74fba16cb77347b1fdc0279fcc4"} Feb 14 10:56:10 crc kubenswrapper[4736]: I0214 10:56:10.873561 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:11 crc kubenswrapper[4736]: I0214 10:56:11.885451 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerDied","Data":"5aa64b36a64e99402654e8f294f67cbac00de321e7a6bfb3166eba6506e4c88b"} Feb 14 10:56:11 crc kubenswrapper[4736]: I0214 10:56:11.885376 4736 generic.go:334] "Generic (PLEG): container finished" podID="efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6" containerID="5aa64b36a64e99402654e8f294f67cbac00de321e7a6bfb3166eba6506e4c88b" exitCode=0 Feb 14 10:56:11 crc kubenswrapper[4736]: I0214 10:56:11.935774 4736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" podStartSLOduration=2.638165494 podStartE2EDuration="9.935731783s" podCreationTimestamp="2026-02-14 10:56:02 +0000 UTC" firstStartedPulling="2026-02-14 10:56:03.308374263 +0000 UTC m=+873.677001641" lastFinishedPulling="2026-02-14 10:56:10.605940542 +0000 UTC m=+880.974567930" observedRunningTime="2026-02-14 10:56:10.911046248 +0000 UTC m=+881.279673626" watchObservedRunningTime="2026-02-14 10:56:11.935731783 +0000 UTC m=+882.304359161" Feb 14 10:56:12 crc kubenswrapper[4736]: I0214 10:56:12.896693 4736 generic.go:334] "Generic (PLEG): container finished" podID="efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6" containerID="5bd1e2620bb1cf6df74147f83b7595be588e90920c27e51993a3114a3b7929d5" exitCode=0 Feb 14 10:56:12 crc kubenswrapper[4736]: I0214 10:56:12.896820 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerDied","Data":"5bd1e2620bb1cf6df74147f83b7595be588e90920c27e51993a3114a3b7929d5"} Feb 14 10:56:13 crc kubenswrapper[4736]: I0214 10:56:13.913425 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerStarted","Data":"dcde053b1c2461fe0afef29e7e7ae905947240c9df5eba9ac6394452690c0fee"} Feb 14 10:56:13 crc kubenswrapper[4736]: I0214 10:56:13.913851 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:13 crc kubenswrapper[4736]: I0214 10:56:13.913871 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerStarted","Data":"cf7aeeb1c17352d1e1e2c16b5e3b525fdcd48dbf9fdbbbd7f9405e24a8ef0cfb"} Feb 14 10:56:13 crc kubenswrapper[4736]: I0214 10:56:13.913885 4736 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerStarted","Data":"2ea268eab5593a77cca12ca51d28e69a1951c6f507c189a5ee3d4fcf421fd78c"} Feb 14 10:56:13 crc kubenswrapper[4736]: I0214 10:56:13.913896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerStarted","Data":"e0c096309d36ef91421f1032fd65e1f1f717932c7f424158d2e09df8d348398b"} Feb 14 10:56:13 crc kubenswrapper[4736]: I0214 10:56:13.913907 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerStarted","Data":"c017df0113e749c2afdfb4833b3ff64305d024978262ebb999ebd06af1d48629"} Feb 14 10:56:13 crc kubenswrapper[4736]: I0214 10:56:13.913918 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wg9h9" event={"ID":"efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6","Type":"ContainerStarted","Data":"7a7fffcf303073580746d7fad8afbf49daaa45ac1a21a966795634fed8e7bd58"} Feb 14 10:56:13 crc kubenswrapper[4736]: I0214 10:56:13.942239 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wg9h9" podStartSLOduration=4.8280667489999995 podStartE2EDuration="11.942208231s" podCreationTimestamp="2026-02-14 10:56:02 +0000 UTC" firstStartedPulling="2026-02-14 10:56:03.516621187 +0000 UTC m=+873.885248555" lastFinishedPulling="2026-02-14 10:56:10.630762659 +0000 UTC m=+880.999390037" observedRunningTime="2026-02-14 10:56:13.933033119 +0000 UTC m=+884.301660507" watchObservedRunningTime="2026-02-14 10:56:13.942208231 +0000 UTC m=+884.310835589" Feb 14 10:56:14 crc kubenswrapper[4736]: I0214 10:56:14.374410 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tm7cx" Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.417648 4736 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lt5gg"] Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.418969 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lt5gg" Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.428031 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.428293 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.440150 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lt5gg"] Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.440738 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-d8vqp" Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.518076 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cfjf\" (UniqueName: \"kubernetes.io/projected/32da2c10-d274-48f6-9cd8-4b0e787f6652-kube-api-access-5cfjf\") pod \"openstack-operator-index-lt5gg\" (UID: \"32da2c10-d274-48f6-9cd8-4b0e787f6652\") " pod="openstack-operators/openstack-operator-index-lt5gg" Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.619585 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cfjf\" (UniqueName: \"kubernetes.io/projected/32da2c10-d274-48f6-9cd8-4b0e787f6652-kube-api-access-5cfjf\") pod \"openstack-operator-index-lt5gg\" (UID: \"32da2c10-d274-48f6-9cd8-4b0e787f6652\") " pod="openstack-operators/openstack-operator-index-lt5gg" Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.647540 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5cfjf\" (UniqueName: \"kubernetes.io/projected/32da2c10-d274-48f6-9cd8-4b0e787f6652-kube-api-access-5cfjf\") pod \"openstack-operator-index-lt5gg\" (UID: \"32da2c10-d274-48f6-9cd8-4b0e787f6652\") " pod="openstack-operators/openstack-operator-index-lt5gg" Feb 14 10:56:17 crc kubenswrapper[4736]: I0214 10:56:17.743676 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lt5gg" Feb 14 10:56:18 crc kubenswrapper[4736]: I0214 10:56:18.152764 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lt5gg"] Feb 14 10:56:18 crc kubenswrapper[4736]: I0214 10:56:18.345077 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:18 crc kubenswrapper[4736]: I0214 10:56:18.392236 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:18 crc kubenswrapper[4736]: I0214 10:56:18.948368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lt5gg" event={"ID":"32da2c10-d274-48f6-9cd8-4b0e787f6652","Type":"ContainerStarted","Data":"18325c731acbd2afacc3d769f6204d948b621abd5e598405f63c03ebd7f71e31"} Feb 14 10:56:20 crc kubenswrapper[4736]: I0214 10:56:20.774343 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lt5gg"] Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.387723 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4hxtr"] Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.397188 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4hxtr" Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.398248 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4hxtr"] Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.503070 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7ncc\" (UniqueName: \"kubernetes.io/projected/6d54b312-9619-450b-a6b2-980caae9860e-kube-api-access-l7ncc\") pod \"openstack-operator-index-4hxtr\" (UID: \"6d54b312-9619-450b-a6b2-980caae9860e\") " pod="openstack-operators/openstack-operator-index-4hxtr" Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.604429 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7ncc\" (UniqueName: \"kubernetes.io/projected/6d54b312-9619-450b-a6b2-980caae9860e-kube-api-access-l7ncc\") pod \"openstack-operator-index-4hxtr\" (UID: \"6d54b312-9619-450b-a6b2-980caae9860e\") " pod="openstack-operators/openstack-operator-index-4hxtr" Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.625290 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7ncc\" (UniqueName: \"kubernetes.io/projected/6d54b312-9619-450b-a6b2-980caae9860e-kube-api-access-l7ncc\") pod \"openstack-operator-index-4hxtr\" (UID: \"6d54b312-9619-450b-a6b2-980caae9860e\") " pod="openstack-operators/openstack-operator-index-4hxtr" Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.719156 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4hxtr" Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.929955 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4hxtr"] Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.971014 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lt5gg" event={"ID":"32da2c10-d274-48f6-9cd8-4b0e787f6652","Type":"ContainerStarted","Data":"99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985"} Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.971169 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-lt5gg" podUID="32da2c10-d274-48f6-9cd8-4b0e787f6652" containerName="registry-server" containerID="cri-o://99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985" gracePeriod=2 Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.974461 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4hxtr" event={"ID":"6d54b312-9619-450b-a6b2-980caae9860e","Type":"ContainerStarted","Data":"42db3daf0c4ab7a40fa3a1c1f63fb8184e13ff64e229bc7eceac12ac2300d305"} Feb 14 10:56:21 crc kubenswrapper[4736]: I0214 10:56:21.988588 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lt5gg" podStartSLOduration=2.125592004 podStartE2EDuration="4.988567413s" podCreationTimestamp="2026-02-14 10:56:17 +0000 UTC" firstStartedPulling="2026-02-14 10:56:18.16091667 +0000 UTC m=+888.529544048" lastFinishedPulling="2026-02-14 10:56:21.023892089 +0000 UTC m=+891.392519457" observedRunningTime="2026-02-14 10:56:21.986254507 +0000 UTC m=+892.354881865" watchObservedRunningTime="2026-02-14 10:56:21.988567413 +0000 UTC m=+892.357194781" Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.373449 4736 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lt5gg" Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.517368 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cfjf\" (UniqueName: \"kubernetes.io/projected/32da2c10-d274-48f6-9cd8-4b0e787f6652-kube-api-access-5cfjf\") pod \"32da2c10-d274-48f6-9cd8-4b0e787f6652\" (UID: \"32da2c10-d274-48f6-9cd8-4b0e787f6652\") " Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.525252 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32da2c10-d274-48f6-9cd8-4b0e787f6652-kube-api-access-5cfjf" (OuterVolumeSpecName: "kube-api-access-5cfjf") pod "32da2c10-d274-48f6-9cd8-4b0e787f6652" (UID: "32da2c10-d274-48f6-9cd8-4b0e787f6652"). InnerVolumeSpecName "kube-api-access-5cfjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.622619 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cfjf\" (UniqueName: \"kubernetes.io/projected/32da2c10-d274-48f6-9cd8-4b0e787f6652-kube-api-access-5cfjf\") on node \"crc\" DevicePath \"\"" Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.770910 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llcsn" Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.879972 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-kxmtf" Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.983820 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4hxtr" event={"ID":"6d54b312-9619-450b-a6b2-980caae9860e","Type":"ContainerStarted","Data":"dc2e0d4c24aeb2bf8b82d7ca674d1eb9722818364b481216cb23e3109547be95"} Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 
10:56:22.986905 4736 generic.go:334] "Generic (PLEG): container finished" podID="32da2c10-d274-48f6-9cd8-4b0e787f6652" containerID="99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985" exitCode=0 Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.986933 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lt5gg" Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.986979 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lt5gg" event={"ID":"32da2c10-d274-48f6-9cd8-4b0e787f6652","Type":"ContainerDied","Data":"99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985"} Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.987074 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lt5gg" event={"ID":"32da2c10-d274-48f6-9cd8-4b0e787f6652","Type":"ContainerDied","Data":"18325c731acbd2afacc3d769f6204d948b621abd5e598405f63c03ebd7f71e31"} Feb 14 10:56:22 crc kubenswrapper[4736]: I0214 10:56:22.987104 4736 scope.go:117] "RemoveContainer" containerID="99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985" Feb 14 10:56:23 crc kubenswrapper[4736]: I0214 10:56:23.019051 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4hxtr" podStartSLOduration=1.955890393 podStartE2EDuration="2.019009342s" podCreationTimestamp="2026-02-14 10:56:21 +0000 UTC" firstStartedPulling="2026-02-14 10:56:21.944243359 +0000 UTC m=+892.312870727" lastFinishedPulling="2026-02-14 10:56:22.007362298 +0000 UTC m=+892.375989676" observedRunningTime="2026-02-14 10:56:23.006785854 +0000 UTC m=+893.375413252" watchObservedRunningTime="2026-02-14 10:56:23.019009342 +0000 UTC m=+893.387636720" Feb 14 10:56:23 crc kubenswrapper[4736]: I0214 10:56:23.019990 4736 scope.go:117] "RemoveContainer" 
containerID="99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985" Feb 14 10:56:23 crc kubenswrapper[4736]: E0214 10:56:23.021789 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985\": container with ID starting with 99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985 not found: ID does not exist" containerID="99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985" Feb 14 10:56:23 crc kubenswrapper[4736]: I0214 10:56:23.021835 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985"} err="failed to get container status \"99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985\": rpc error: code = NotFound desc = could not find container \"99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985\": container with ID starting with 99321ae365e2ee7116081307ac88767bd4838f8dd1d917bfb6ab3a97b090f985 not found: ID does not exist" Feb 14 10:56:23 crc kubenswrapper[4736]: I0214 10:56:23.041628 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lt5gg"] Feb 14 10:56:23 crc kubenswrapper[4736]: I0214 10:56:23.046439 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-lt5gg"] Feb 14 10:56:23 crc kubenswrapper[4736]: I0214 10:56:23.346513 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wg9h9" Feb 14 10:56:24 crc kubenswrapper[4736]: I0214 10:56:24.408660 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32da2c10-d274-48f6-9cd8-4b0e787f6652" path="/var/lib/kubelet/pods/32da2c10-d274-48f6-9cd8-4b0e787f6652/volumes" Feb 14 10:56:31 crc kubenswrapper[4736]: I0214 10:56:31.719796 4736 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4hxtr" Feb 14 10:56:31 crc kubenswrapper[4736]: I0214 10:56:31.721836 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4hxtr" Feb 14 10:56:31 crc kubenswrapper[4736]: I0214 10:56:31.753524 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4hxtr" Feb 14 10:56:32 crc kubenswrapper[4736]: I0214 10:56:32.101328 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4hxtr" Feb 14 10:56:38 crc kubenswrapper[4736]: I0214 10:56:38.811563 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp"] Feb 14 10:56:38 crc kubenswrapper[4736]: E0214 10:56:38.812220 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32da2c10-d274-48f6-9cd8-4b0e787f6652" containerName="registry-server" Feb 14 10:56:38 crc kubenswrapper[4736]: I0214 10:56:38.812237 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="32da2c10-d274-48f6-9cd8-4b0e787f6652" containerName="registry-server" Feb 14 10:56:38 crc kubenswrapper[4736]: I0214 10:56:38.812379 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="32da2c10-d274-48f6-9cd8-4b0e787f6652" containerName="registry-server" Feb 14 10:56:38 crc kubenswrapper[4736]: I0214 10:56:38.813561 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:38 crc kubenswrapper[4736]: I0214 10:56:38.816959 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-qw77p" Feb 14 10:56:38 crc kubenswrapper[4736]: I0214 10:56:38.834108 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp"] Feb 14 10:56:38 crc kubenswrapper[4736]: I0214 10:56:38.942327 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-util\") pod \"7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:38 crc kubenswrapper[4736]: I0214 10:56:38.942470 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-bundle\") pod \"7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:38 crc kubenswrapper[4736]: I0214 10:56:38.942623 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9c6r\" (UniqueName: \"kubernetes.io/projected/be6a93c1-1984-4663-b989-684667f31ec9-kube-api-access-h9c6r\") pod \"7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:39 crc kubenswrapper[4736]: I0214 
10:56:39.044199 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9c6r\" (UniqueName: \"kubernetes.io/projected/be6a93c1-1984-4663-b989-684667f31ec9-kube-api-access-h9c6r\") pod \"7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:39 crc kubenswrapper[4736]: I0214 10:56:39.044343 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-util\") pod \"7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:39 crc kubenswrapper[4736]: I0214 10:56:39.044408 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-bundle\") pod \"7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:39 crc kubenswrapper[4736]: I0214 10:56:39.045082 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-bundle\") pod \"7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:39 crc kubenswrapper[4736]: I0214 10:56:39.045156 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-util\") pod \"7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:39 crc kubenswrapper[4736]: I0214 10:56:39.081077 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9c6r\" (UniqueName: \"kubernetes.io/projected/be6a93c1-1984-4663-b989-684667f31ec9-kube-api-access-h9c6r\") pod \"7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:39 crc kubenswrapper[4736]: I0214 10:56:39.151524 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:39 crc kubenswrapper[4736]: I0214 10:56:39.604259 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp"] Feb 14 10:56:40 crc kubenswrapper[4736]: I0214 10:56:40.125270 4736 generic.go:334] "Generic (PLEG): container finished" podID="be6a93c1-1984-4663-b989-684667f31ec9" containerID="759eea154f320f655dabafd357477ac0d52227e6ca6a69b6baed6a56b3374e3b" exitCode=0 Feb 14 10:56:40 crc kubenswrapper[4736]: I0214 10:56:40.125318 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" event={"ID":"be6a93c1-1984-4663-b989-684667f31ec9","Type":"ContainerDied","Data":"759eea154f320f655dabafd357477ac0d52227e6ca6a69b6baed6a56b3374e3b"} Feb 14 10:56:40 crc kubenswrapper[4736]: I0214 10:56:40.125342 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" event={"ID":"be6a93c1-1984-4663-b989-684667f31ec9","Type":"ContainerStarted","Data":"32e5a376f011b7fc6da75d0f0a90e9cbab30efd0f066da30b50a24e8ccd47ef7"} Feb 14 10:56:41 crc kubenswrapper[4736]: I0214 10:56:41.137327 4736 generic.go:334] "Generic (PLEG): container finished" podID="be6a93c1-1984-4663-b989-684667f31ec9" containerID="280d48141941dc88c338cd2318ac7b57acbfc309e102029b38618ce7daf69924" exitCode=0 Feb 14 10:56:41 crc kubenswrapper[4736]: I0214 10:56:41.137423 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" event={"ID":"be6a93c1-1984-4663-b989-684667f31ec9","Type":"ContainerDied","Data":"280d48141941dc88c338cd2318ac7b57acbfc309e102029b38618ce7daf69924"} Feb 14 10:56:42 crc kubenswrapper[4736]: I0214 10:56:42.155341 4736 generic.go:334] "Generic (PLEG): container finished" podID="be6a93c1-1984-4663-b989-684667f31ec9" containerID="5d7ddb6aa2c69ea884400837e4c561976ef057dc2ad48e82515d2094222fe622" exitCode=0 Feb 14 10:56:42 crc kubenswrapper[4736]: I0214 10:56:42.155419 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" event={"ID":"be6a93c1-1984-4663-b989-684667f31ec9","Type":"ContainerDied","Data":"5d7ddb6aa2c69ea884400837e4c561976ef057dc2ad48e82515d2094222fe622"} Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.440577 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.609807 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9c6r\" (UniqueName: \"kubernetes.io/projected/be6a93c1-1984-4663-b989-684667f31ec9-kube-api-access-h9c6r\") pod \"be6a93c1-1984-4663-b989-684667f31ec9\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.609850 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-bundle\") pod \"be6a93c1-1984-4663-b989-684667f31ec9\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.609873 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-util\") pod \"be6a93c1-1984-4663-b989-684667f31ec9\" (UID: \"be6a93c1-1984-4663-b989-684667f31ec9\") " Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.611338 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-bundle" (OuterVolumeSpecName: "bundle") pod "be6a93c1-1984-4663-b989-684667f31ec9" (UID: "be6a93c1-1984-4663-b989-684667f31ec9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.616732 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6a93c1-1984-4663-b989-684667f31ec9-kube-api-access-h9c6r" (OuterVolumeSpecName: "kube-api-access-h9c6r") pod "be6a93c1-1984-4663-b989-684667f31ec9" (UID: "be6a93c1-1984-4663-b989-684667f31ec9"). InnerVolumeSpecName "kube-api-access-h9c6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.623905 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-util" (OuterVolumeSpecName: "util") pod "be6a93c1-1984-4663-b989-684667f31ec9" (UID: "be6a93c1-1984-4663-b989-684667f31ec9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.711634 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9c6r\" (UniqueName: \"kubernetes.io/projected/be6a93c1-1984-4663-b989-684667f31ec9-kube-api-access-h9c6r\") on node \"crc\" DevicePath \"\"" Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.711675 4736 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:56:43 crc kubenswrapper[4736]: I0214 10:56:43.711717 4736 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be6a93c1-1984-4663-b989-684667f31ec9-util\") on node \"crc\" DevicePath \"\"" Feb 14 10:56:44 crc kubenswrapper[4736]: I0214 10:56:44.173711 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" event={"ID":"be6a93c1-1984-4663-b989-684667f31ec9","Type":"ContainerDied","Data":"32e5a376f011b7fc6da75d0f0a90e9cbab30efd0f066da30b50a24e8ccd47ef7"} Feb 14 10:56:44 crc kubenswrapper[4736]: I0214 10:56:44.174025 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e5a376f011b7fc6da75d0f0a90e9cbab30efd0f066da30b50a24e8ccd47ef7" Feb 14 10:56:44 crc kubenswrapper[4736]: I0214 10:56:44.173834 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.446163 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg"] Feb 14 10:56:51 crc kubenswrapper[4736]: E0214 10:56:51.446727 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6a93c1-1984-4663-b989-684667f31ec9" containerName="util" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.446764 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6a93c1-1984-4663-b989-684667f31ec9" containerName="util" Feb 14 10:56:51 crc kubenswrapper[4736]: E0214 10:56:51.446788 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6a93c1-1984-4663-b989-684667f31ec9" containerName="pull" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.446798 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6a93c1-1984-4663-b989-684667f31ec9" containerName="pull" Feb 14 10:56:51 crc kubenswrapper[4736]: E0214 10:56:51.446812 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6a93c1-1984-4663-b989-684667f31ec9" containerName="extract" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.446822 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6a93c1-1984-4663-b989-684667f31ec9" containerName="extract" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.446967 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6a93c1-1984-4663-b989-684667f31ec9" containerName="extract" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.447459 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.450756 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-9r7d4" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.481809 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg"] Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.627279 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5vvb\" (UniqueName: \"kubernetes.io/projected/b74fa186-0772-4d4e-abcd-b04bc6fa4751-kube-api-access-p5vvb\") pod \"openstack-operator-controller-init-69b468cbcf-657fg\" (UID: \"b74fa186-0772-4d4e-abcd-b04bc6fa4751\") " pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.729076 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5vvb\" (UniqueName: \"kubernetes.io/projected/b74fa186-0772-4d4e-abcd-b04bc6fa4751-kube-api-access-p5vvb\") pod \"openstack-operator-controller-init-69b468cbcf-657fg\" (UID: \"b74fa186-0772-4d4e-abcd-b04bc6fa4751\") " pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.749386 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5vvb\" (UniqueName: \"kubernetes.io/projected/b74fa186-0772-4d4e-abcd-b04bc6fa4751-kube-api-access-p5vvb\") pod \"openstack-operator-controller-init-69b468cbcf-657fg\" (UID: \"b74fa186-0772-4d4e-abcd-b04bc6fa4751\") " pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" Feb 14 10:56:51 crc kubenswrapper[4736]: I0214 10:56:51.763099 4736 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" Feb 14 10:56:52 crc kubenswrapper[4736]: I0214 10:56:52.001383 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg"] Feb 14 10:56:52 crc kubenswrapper[4736]: I0214 10:56:52.248545 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" event={"ID":"b74fa186-0772-4d4e-abcd-b04bc6fa4751","Type":"ContainerStarted","Data":"06f6e106982d9938aecaac45c34479a557b9f4c60523b5906acc6ef6e6dcfab0"} Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.280321 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ffgmk"] Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.281737 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.298425 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffgmk"] Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.477759 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xk6h\" (UniqueName: \"kubernetes.io/projected/93985c21-680e-42a2-9f26-b24a18788d9e-kube-api-access-9xk6h\") pod \"certified-operators-ffgmk\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.478110 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-catalog-content\") pod \"certified-operators-ffgmk\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " 
pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.478345 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-utilities\") pod \"certified-operators-ffgmk\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.579109 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xk6h\" (UniqueName: \"kubernetes.io/projected/93985c21-680e-42a2-9f26-b24a18788d9e-kube-api-access-9xk6h\") pod \"certified-operators-ffgmk\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.579181 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-catalog-content\") pod \"certified-operators-ffgmk\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.579232 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-utilities\") pod \"certified-operators-ffgmk\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.579600 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-catalog-content\") pod \"certified-operators-ffgmk\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " 
pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.579651 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-utilities\") pod \"certified-operators-ffgmk\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.608573 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xk6h\" (UniqueName: \"kubernetes.io/projected/93985c21-680e-42a2-9f26-b24a18788d9e-kube-api-access-9xk6h\") pod \"certified-operators-ffgmk\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:54 crc kubenswrapper[4736]: I0214 10:56:54.615894 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:56:56 crc kubenswrapper[4736]: I0214 10:56:56.744950 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffgmk"] Feb 14 10:56:56 crc kubenswrapper[4736]: W0214 10:56:56.760068 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93985c21_680e_42a2_9f26_b24a18788d9e.slice/crio-2464917637d1b7b51029aa7fa1716253ca1473f4efce622ae6f868335ea18c7f WatchSource:0}: Error finding container 2464917637d1b7b51029aa7fa1716253ca1473f4efce622ae6f868335ea18c7f: Status 404 returned error can't find the container with id 2464917637d1b7b51029aa7fa1716253ca1473f4efce622ae6f868335ea18c7f Feb 14 10:56:57 crc kubenswrapper[4736]: I0214 10:56:57.276914 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" 
event={"ID":"b74fa186-0772-4d4e-abcd-b04bc6fa4751","Type":"ContainerStarted","Data":"7b9dff4893506e1089cd8cc1a60e9da4b91d0365e94be39f7374ff095b811679"} Feb 14 10:56:57 crc kubenswrapper[4736]: I0214 10:56:57.277324 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" Feb 14 10:56:57 crc kubenswrapper[4736]: I0214 10:56:57.278722 4736 generic.go:334] "Generic (PLEG): container finished" podID="93985c21-680e-42a2-9f26-b24a18788d9e" containerID="c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6" exitCode=0 Feb 14 10:56:57 crc kubenswrapper[4736]: I0214 10:56:57.278781 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffgmk" event={"ID":"93985c21-680e-42a2-9f26-b24a18788d9e","Type":"ContainerDied","Data":"c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6"} Feb 14 10:56:57 crc kubenswrapper[4736]: I0214 10:56:57.278809 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffgmk" event={"ID":"93985c21-680e-42a2-9f26-b24a18788d9e","Type":"ContainerStarted","Data":"2464917637d1b7b51029aa7fa1716253ca1473f4efce622ae6f868335ea18c7f"} Feb 14 10:56:57 crc kubenswrapper[4736]: I0214 10:56:57.358494 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" podStartSLOduration=1.64228898 podStartE2EDuration="6.358472681s" podCreationTimestamp="2026-02-14 10:56:51 +0000 UTC" firstStartedPulling="2026-02-14 10:56:52.01486033 +0000 UTC m=+922.383487698" lastFinishedPulling="2026-02-14 10:56:56.731044031 +0000 UTC m=+927.099671399" observedRunningTime="2026-02-14 10:56:57.332071297 +0000 UTC m=+927.700698705" watchObservedRunningTime="2026-02-14 10:56:57.358472681 +0000 UTC m=+927.727100069" Feb 14 10:56:58 crc kubenswrapper[4736]: I0214 10:56:58.287616 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffgmk" event={"ID":"93985c21-680e-42a2-9f26-b24a18788d9e","Type":"ContainerStarted","Data":"ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22"} Feb 14 10:56:59 crc kubenswrapper[4736]: I0214 10:56:59.295317 4736 generic.go:334] "Generic (PLEG): container finished" podID="93985c21-680e-42a2-9f26-b24a18788d9e" containerID="ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22" exitCode=0 Feb 14 10:56:59 crc kubenswrapper[4736]: I0214 10:56:59.295405 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffgmk" event={"ID":"93985c21-680e-42a2-9f26-b24a18788d9e","Type":"ContainerDied","Data":"ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22"} Feb 14 10:57:00 crc kubenswrapper[4736]: I0214 10:57:00.307680 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffgmk" event={"ID":"93985c21-680e-42a2-9f26-b24a18788d9e","Type":"ContainerStarted","Data":"76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67"} Feb 14 10:57:00 crc kubenswrapper[4736]: I0214 10:57:00.327891 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ffgmk" podStartSLOduration=3.919810725 podStartE2EDuration="6.327876368s" podCreationTimestamp="2026-02-14 10:56:54 +0000 UTC" firstStartedPulling="2026-02-14 10:56:57.280539386 +0000 UTC m=+927.649166744" lastFinishedPulling="2026-02-14 10:56:59.688605019 +0000 UTC m=+930.057232387" observedRunningTime="2026-02-14 10:57:00.323973627 +0000 UTC m=+930.692600995" watchObservedRunningTime="2026-02-14 10:57:00.327876368 +0000 UTC m=+930.696503736" Feb 14 10:57:01 crc kubenswrapper[4736]: I0214 10:57:01.766836 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-69b468cbcf-657fg" Feb 14 10:57:04 crc kubenswrapper[4736]: I0214 10:57:04.617676 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:57:04 crc kubenswrapper[4736]: I0214 10:57:04.618012 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:57:04 crc kubenswrapper[4736]: I0214 10:57:04.674909 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:57:05 crc kubenswrapper[4736]: I0214 10:57:05.373667 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:57:06 crc kubenswrapper[4736]: I0214 10:57:06.881234 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-28ddb"] Feb 14 10:57:06 crc kubenswrapper[4736]: I0214 10:57:06.882928 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:06 crc kubenswrapper[4736]: I0214 10:57:06.898877 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-28ddb"] Feb 14 10:57:06 crc kubenswrapper[4736]: I0214 10:57:06.963208 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-utilities\") pod \"community-operators-28ddb\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:06 crc kubenswrapper[4736]: I0214 10:57:06.963327 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-catalog-content\") pod \"community-operators-28ddb\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:06 crc kubenswrapper[4736]: I0214 10:57:06.963433 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c99q\" (UniqueName: \"kubernetes.io/projected/49d73488-f402-4310-89f4-99bdc6205893-kube-api-access-8c99q\") pod \"community-operators-28ddb\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:07 crc kubenswrapper[4736]: I0214 10:57:07.064616 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-catalog-content\") pod \"community-operators-28ddb\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:07 crc kubenswrapper[4736]: I0214 10:57:07.064684 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8c99q\" (UniqueName: \"kubernetes.io/projected/49d73488-f402-4310-89f4-99bdc6205893-kube-api-access-8c99q\") pod \"community-operators-28ddb\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:07 crc kubenswrapper[4736]: I0214 10:57:07.064707 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-utilities\") pod \"community-operators-28ddb\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:07 crc kubenswrapper[4736]: I0214 10:57:07.065170 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-catalog-content\") pod \"community-operators-28ddb\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:07 crc kubenswrapper[4736]: I0214 10:57:07.065178 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-utilities\") pod \"community-operators-28ddb\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:07 crc kubenswrapper[4736]: I0214 10:57:07.083011 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c99q\" (UniqueName: \"kubernetes.io/projected/49d73488-f402-4310-89f4-99bdc6205893-kube-api-access-8c99q\") pod \"community-operators-28ddb\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:07 crc kubenswrapper[4736]: I0214 10:57:07.201926 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:07 crc kubenswrapper[4736]: I0214 10:57:07.533842 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-28ddb"] Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.276873 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffgmk"] Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.277297 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ffgmk" podUID="93985c21-680e-42a2-9f26-b24a18788d9e" containerName="registry-server" containerID="cri-o://76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67" gracePeriod=2 Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.353082 4736 generic.go:334] "Generic (PLEG): container finished" podID="49d73488-f402-4310-89f4-99bdc6205893" containerID="c225ae7a7179fa0e8473ac85aad1d927bd8bee28f723a2527ca02cd80497a080" exitCode=0 Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.353122 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28ddb" event={"ID":"49d73488-f402-4310-89f4-99bdc6205893","Type":"ContainerDied","Data":"c225ae7a7179fa0e8473ac85aad1d927bd8bee28f723a2527ca02cd80497a080"} Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.353144 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28ddb" event={"ID":"49d73488-f402-4310-89f4-99bdc6205893","Type":"ContainerStarted","Data":"9aaeb17e3dfb6a00f70be00e8e327e4d80768edc36e83255df07b2924440797f"} Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.648928 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.788330 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-catalog-content\") pod \"93985c21-680e-42a2-9f26-b24a18788d9e\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.788543 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xk6h\" (UniqueName: \"kubernetes.io/projected/93985c21-680e-42a2-9f26-b24a18788d9e-kube-api-access-9xk6h\") pod \"93985c21-680e-42a2-9f26-b24a18788d9e\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.788613 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-utilities\") pod \"93985c21-680e-42a2-9f26-b24a18788d9e\" (UID: \"93985c21-680e-42a2-9f26-b24a18788d9e\") " Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.790063 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-utilities" (OuterVolumeSpecName: "utilities") pod "93985c21-680e-42a2-9f26-b24a18788d9e" (UID: "93985c21-680e-42a2-9f26-b24a18788d9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.810053 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93985c21-680e-42a2-9f26-b24a18788d9e-kube-api-access-9xk6h" (OuterVolumeSpecName: "kube-api-access-9xk6h") pod "93985c21-680e-42a2-9f26-b24a18788d9e" (UID: "93985c21-680e-42a2-9f26-b24a18788d9e"). InnerVolumeSpecName "kube-api-access-9xk6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.853455 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93985c21-680e-42a2-9f26-b24a18788d9e" (UID: "93985c21-680e-42a2-9f26-b24a18788d9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.890348 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.890383 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xk6h\" (UniqueName: \"kubernetes.io/projected/93985c21-680e-42a2-9f26-b24a18788d9e-kube-api-access-9xk6h\") on node \"crc\" DevicePath \"\"" Feb 14 10:57:08 crc kubenswrapper[4736]: I0214 10:57:08.890399 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93985c21-680e-42a2-9f26-b24a18788d9e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.361349 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28ddb" event={"ID":"49d73488-f402-4310-89f4-99bdc6205893","Type":"ContainerStarted","Data":"c2da0cefd27e2f3f47907c4d1fdc50f0c26aec80a3c7af72b815a90d9a92aebf"} Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.366206 4736 generic.go:334] "Generic (PLEG): container finished" podID="93985c21-680e-42a2-9f26-b24a18788d9e" containerID="76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67" exitCode=0 Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.366244 4736 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-ffgmk" event={"ID":"93985c21-680e-42a2-9f26-b24a18788d9e","Type":"ContainerDied","Data":"76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67"} Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.366267 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffgmk" event={"ID":"93985c21-680e-42a2-9f26-b24a18788d9e","Type":"ContainerDied","Data":"2464917637d1b7b51029aa7fa1716253ca1473f4efce622ae6f868335ea18c7f"} Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.366283 4736 scope.go:117] "RemoveContainer" containerID="76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.366286 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffgmk" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.393194 4736 scope.go:117] "RemoveContainer" containerID="ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.398838 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffgmk"] Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.408347 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ffgmk"] Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.414109 4736 scope.go:117] "RemoveContainer" containerID="c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.432118 4736 scope.go:117] "RemoveContainer" containerID="76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67" Feb 14 10:57:09 crc kubenswrapper[4736]: E0214 10:57:09.432651 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67\": container with ID starting with 76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67 not found: ID does not exist" containerID="76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.432695 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67"} err="failed to get container status \"76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67\": rpc error: code = NotFound desc = could not find container \"76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67\": container with ID starting with 76ae9dd24b0957e16d44c334578b2354b3eaf770b846b56894947999345dde67 not found: ID does not exist" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.432720 4736 scope.go:117] "RemoveContainer" containerID="ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22" Feb 14 10:57:09 crc kubenswrapper[4736]: E0214 10:57:09.441312 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22\": container with ID starting with ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22 not found: ID does not exist" containerID="ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.441357 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22"} err="failed to get container status \"ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22\": rpc error: code = NotFound desc = could not find container \"ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22\": container with ID 
starting with ff69c77e0692f19b7efbb29127d6708b5732d8f2ffd43649ecbc0df507b40c22 not found: ID does not exist" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.441386 4736 scope.go:117] "RemoveContainer" containerID="c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6" Feb 14 10:57:09 crc kubenswrapper[4736]: E0214 10:57:09.441833 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6\": container with ID starting with c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6 not found: ID does not exist" containerID="c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6" Feb 14 10:57:09 crc kubenswrapper[4736]: I0214 10:57:09.441854 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6"} err="failed to get container status \"c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6\": rpc error: code = NotFound desc = could not find container \"c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6\": container with ID starting with c8baabe892f3a07260e1d453b6d989666a79d15331cff4385d381dfef1d1b4d6 not found: ID does not exist" Feb 14 10:57:10 crc kubenswrapper[4736]: I0214 10:57:10.375407 4736 generic.go:334] "Generic (PLEG): container finished" podID="49d73488-f402-4310-89f4-99bdc6205893" containerID="c2da0cefd27e2f3f47907c4d1fdc50f0c26aec80a3c7af72b815a90d9a92aebf" exitCode=0 Feb 14 10:57:10 crc kubenswrapper[4736]: I0214 10:57:10.375539 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28ddb" event={"ID":"49d73488-f402-4310-89f4-99bdc6205893","Type":"ContainerDied","Data":"c2da0cefd27e2f3f47907c4d1fdc50f0c26aec80a3c7af72b815a90d9a92aebf"} Feb 14 10:57:10 crc kubenswrapper[4736]: I0214 10:57:10.406467 4736 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93985c21-680e-42a2-9f26-b24a18788d9e" path="/var/lib/kubelet/pods/93985c21-680e-42a2-9f26-b24a18788d9e/volumes" Feb 14 10:57:11 crc kubenswrapper[4736]: I0214 10:57:11.390103 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28ddb" event={"ID":"49d73488-f402-4310-89f4-99bdc6205893","Type":"ContainerStarted","Data":"216e257ff0bf687d07fb452c2aff5fd957604f996b42b549cb155685726e9295"} Feb 14 10:57:11 crc kubenswrapper[4736]: I0214 10:57:11.421998 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-28ddb" podStartSLOduration=2.994536739 podStartE2EDuration="5.421977355s" podCreationTimestamp="2026-02-14 10:57:06 +0000 UTC" firstStartedPulling="2026-02-14 10:57:08.354360195 +0000 UTC m=+938.722987563" lastFinishedPulling="2026-02-14 10:57:10.781800811 +0000 UTC m=+941.150428179" observedRunningTime="2026-02-14 10:57:11.417406314 +0000 UTC m=+941.786033712" watchObservedRunningTime="2026-02-14 10:57:11.421977355 +0000 UTC m=+941.790604733" Feb 14 10:57:17 crc kubenswrapper[4736]: I0214 10:57:17.202898 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:17 crc kubenswrapper[4736]: I0214 10:57:17.203472 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:17 crc kubenswrapper[4736]: I0214 10:57:17.260160 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:17 crc kubenswrapper[4736]: I0214 10:57:17.556463 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:19 crc kubenswrapper[4736]: I0214 10:57:19.675639 4736 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/community-operators-28ddb"] Feb 14 10:57:19 crc kubenswrapper[4736]: I0214 10:57:19.676077 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-28ddb" podUID="49d73488-f402-4310-89f4-99bdc6205893" containerName="registry-server" containerID="cri-o://216e257ff0bf687d07fb452c2aff5fd957604f996b42b549cb155685726e9295" gracePeriod=2 Feb 14 10:57:20 crc kubenswrapper[4736]: I0214 10:57:20.448556 4736 generic.go:334] "Generic (PLEG): container finished" podID="49d73488-f402-4310-89f4-99bdc6205893" containerID="216e257ff0bf687d07fb452c2aff5fd957604f996b42b549cb155685726e9295" exitCode=0 Feb 14 10:57:20 crc kubenswrapper[4736]: I0214 10:57:20.448764 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28ddb" event={"ID":"49d73488-f402-4310-89f4-99bdc6205893","Type":"ContainerDied","Data":"216e257ff0bf687d07fb452c2aff5fd957604f996b42b549cb155685726e9295"} Feb 14 10:57:20 crc kubenswrapper[4736]: I0214 10:57:20.894142 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.081335 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-utilities\") pod \"49d73488-f402-4310-89f4-99bdc6205893\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.081429 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-catalog-content\") pod \"49d73488-f402-4310-89f4-99bdc6205893\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.081461 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c99q\" (UniqueName: \"kubernetes.io/projected/49d73488-f402-4310-89f4-99bdc6205893-kube-api-access-8c99q\") pod \"49d73488-f402-4310-89f4-99bdc6205893\" (UID: \"49d73488-f402-4310-89f4-99bdc6205893\") " Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.082458 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-utilities" (OuterVolumeSpecName: "utilities") pod "49d73488-f402-4310-89f4-99bdc6205893" (UID: "49d73488-f402-4310-89f4-99bdc6205893"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.082600 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.089546 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d73488-f402-4310-89f4-99bdc6205893-kube-api-access-8c99q" (OuterVolumeSpecName: "kube-api-access-8c99q") pod "49d73488-f402-4310-89f4-99bdc6205893" (UID: "49d73488-f402-4310-89f4-99bdc6205893"). InnerVolumeSpecName "kube-api-access-8c99q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.183425 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c99q\" (UniqueName: \"kubernetes.io/projected/49d73488-f402-4310-89f4-99bdc6205893-kube-api-access-8c99q\") on node \"crc\" DevicePath \"\"" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.311401 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49d73488-f402-4310-89f4-99bdc6205893" (UID: "49d73488-f402-4310-89f4-99bdc6205893"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.385666 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d73488-f402-4310-89f4-99bdc6205893-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.459770 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28ddb" event={"ID":"49d73488-f402-4310-89f4-99bdc6205893","Type":"ContainerDied","Data":"9aaeb17e3dfb6a00f70be00e8e327e4d80768edc36e83255df07b2924440797f"} Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.459833 4736 scope.go:117] "RemoveContainer" containerID="216e257ff0bf687d07fb452c2aff5fd957604f996b42b549cb155685726e9295" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.459927 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-28ddb" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.482654 4736 scope.go:117] "RemoveContainer" containerID="c2da0cefd27e2f3f47907c4d1fdc50f0c26aec80a3c7af72b815a90d9a92aebf" Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.516820 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-28ddb"] Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.517969 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-28ddb"] Feb 14 10:57:21 crc kubenswrapper[4736]: I0214 10:57:21.527486 4736 scope.go:117] "RemoveContainer" containerID="c225ae7a7179fa0e8473ac85aad1d927bd8bee28f723a2527ca02cd80497a080" Feb 14 10:57:22 crc kubenswrapper[4736]: I0214 10:57:22.406701 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d73488-f402-4310-89f4-99bdc6205893" path="/var/lib/kubelet/pods/49d73488-f402-4310-89f4-99bdc6205893/volumes" Feb 14 10:57:27 crc 
kubenswrapper[4736]: I0214 10:57:27.254940 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbh2"] Feb 14 10:57:27 crc kubenswrapper[4736]: E0214 10:57:27.255991 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d73488-f402-4310-89f4-99bdc6205893" containerName="extract-utilities" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.256010 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d73488-f402-4310-89f4-99bdc6205893" containerName="extract-utilities" Feb 14 10:57:27 crc kubenswrapper[4736]: E0214 10:57:27.256022 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93985c21-680e-42a2-9f26-b24a18788d9e" containerName="extract-utilities" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.256028 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="93985c21-680e-42a2-9f26-b24a18788d9e" containerName="extract-utilities" Feb 14 10:57:27 crc kubenswrapper[4736]: E0214 10:57:27.256047 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93985c21-680e-42a2-9f26-b24a18788d9e" containerName="extract-content" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.256053 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="93985c21-680e-42a2-9f26-b24a18788d9e" containerName="extract-content" Feb 14 10:57:27 crc kubenswrapper[4736]: E0214 10:57:27.256059 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d73488-f402-4310-89f4-99bdc6205893" containerName="extract-content" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.256065 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d73488-f402-4310-89f4-99bdc6205893" containerName="extract-content" Feb 14 10:57:27 crc kubenswrapper[4736]: E0214 10:57:27.256075 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d73488-f402-4310-89f4-99bdc6205893" containerName="registry-server" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 
10:57:27.256080 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d73488-f402-4310-89f4-99bdc6205893" containerName="registry-server" Feb 14 10:57:27 crc kubenswrapper[4736]: E0214 10:57:27.256092 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93985c21-680e-42a2-9f26-b24a18788d9e" containerName="registry-server" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.256098 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="93985c21-680e-42a2-9f26-b24a18788d9e" containerName="registry-server" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.256216 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d73488-f402-4310-89f4-99bdc6205893" containerName="registry-server" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.256232 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="93985c21-680e-42a2-9f26-b24a18788d9e" containerName="registry-server" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.257035 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.323942 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbh2"] Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.360204 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-catalog-content\") pod \"redhat-marketplace-9mbh2\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.360239 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-utilities\") pod \"redhat-marketplace-9mbh2\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.360451 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwpkk\" (UniqueName: \"kubernetes.io/projected/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-kube-api-access-fwpkk\") pod \"redhat-marketplace-9mbh2\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.462117 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-catalog-content\") pod \"redhat-marketplace-9mbh2\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.462323 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-utilities\") pod \"redhat-marketplace-9mbh2\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.462483 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwpkk\" (UniqueName: \"kubernetes.io/projected/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-kube-api-access-fwpkk\") pod \"redhat-marketplace-9mbh2\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.463449 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-catalog-content\") pod \"redhat-marketplace-9mbh2\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.463719 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-utilities\") pod \"redhat-marketplace-9mbh2\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.480895 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwpkk\" (UniqueName: \"kubernetes.io/projected/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-kube-api-access-fwpkk\") pod \"redhat-marketplace-9mbh2\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:27 crc kubenswrapper[4736]: I0214 10:57:27.571378 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:28 crc kubenswrapper[4736]: I0214 10:57:28.023159 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbh2"] Feb 14 10:57:28 crc kubenswrapper[4736]: I0214 10:57:28.499320 4736 generic.go:334] "Generic (PLEG): container finished" podID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerID="6dc410755a71960abed2cfdaad28f98b91490a9051f8f111e61195be4249d6c6" exitCode=0 Feb 14 10:57:28 crc kubenswrapper[4736]: I0214 10:57:28.499368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbh2" event={"ID":"701ac0f0-8351-4ae8-b5cf-e4be16f58a64","Type":"ContainerDied","Data":"6dc410755a71960abed2cfdaad28f98b91490a9051f8f111e61195be4249d6c6"} Feb 14 10:57:28 crc kubenswrapper[4736]: I0214 10:57:28.499398 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbh2" event={"ID":"701ac0f0-8351-4ae8-b5cf-e4be16f58a64","Type":"ContainerStarted","Data":"85ca86f8d956611208e67980e42cf11b08d3996be27e0cee33f27a217a12c475"} Feb 14 10:57:29 crc kubenswrapper[4736]: I0214 10:57:29.507044 4736 generic.go:334] "Generic (PLEG): container finished" podID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerID="fefec53c895f3021ce5d8fe203aca3e847079351dcabf67c43dbbf24663b3692" exitCode=0 Feb 14 10:57:29 crc kubenswrapper[4736]: I0214 10:57:29.507262 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbh2" event={"ID":"701ac0f0-8351-4ae8-b5cf-e4be16f58a64","Type":"ContainerDied","Data":"fefec53c895f3021ce5d8fe203aca3e847079351dcabf67c43dbbf24663b3692"} Feb 14 10:57:30 crc kubenswrapper[4736]: I0214 10:57:30.513982 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbh2" 
event={"ID":"701ac0f0-8351-4ae8-b5cf-e4be16f58a64","Type":"ContainerStarted","Data":"c866058d25f9f3bbe1ea8f5e5f279c804d8f07a77d2feea4d8acf0d29fa7a176"} Feb 14 10:57:30 crc kubenswrapper[4736]: I0214 10:57:30.530664 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9mbh2" podStartSLOduration=2.148745174 podStartE2EDuration="3.530643551s" podCreationTimestamp="2026-02-14 10:57:27 +0000 UTC" firstStartedPulling="2026-02-14 10:57:28.500650753 +0000 UTC m=+958.869278121" lastFinishedPulling="2026-02-14 10:57:29.88254913 +0000 UTC m=+960.251176498" observedRunningTime="2026-02-14 10:57:30.529842548 +0000 UTC m=+960.898469966" watchObservedRunningTime="2026-02-14 10:57:30.530643551 +0000 UTC m=+960.899270919" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.572008 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.572453 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.614659 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.775293 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.776389 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.779221 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vhk72" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.779699 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.780476 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.786396 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5pqhs" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.797178 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.801196 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.854356 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.855070 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.862472 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-d5gww" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.876566 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.877507 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.879764 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-b52cp" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.891199 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6qd\" (UniqueName: \"kubernetes.io/projected/d6185be6-e012-411d-9b85-c971e12aebbd-kube-api-access-cd6qd\") pod \"cinder-operator-controller-manager-768c8b45bb-jbwwk\" (UID: \"d6185be6-e012-411d-9b85-c971e12aebbd\") " pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.891258 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz624\" (UniqueName: \"kubernetes.io/projected/a1aa4225-909d-49ae-8ac7-d987a760f2d2-kube-api-access-qz624\") pod \"barbican-operator-controller-manager-c4b7d6946-vg9f9\" (UID: \"a1aa4225-909d-49ae-8ac7-d987a760f2d2\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.896533 4736 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.904668 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.905800 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.909466 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4hz5j" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.917518 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.918611 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.923960 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jq2zq" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.941119 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.968059 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.973061 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.982126 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lqlnk" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.982207 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.985839 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx"] Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.986564 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.990286 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7bngx" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.992485 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6qd\" (UniqueName: \"kubernetes.io/projected/d6185be6-e012-411d-9b85-c971e12aebbd-kube-api-access-cd6qd\") pod \"cinder-operator-controller-manager-768c8b45bb-jbwwk\" (UID: \"d6185be6-e012-411d-9b85-c971e12aebbd\") " pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.992638 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz624\" (UniqueName: \"kubernetes.io/projected/a1aa4225-909d-49ae-8ac7-d987a760f2d2-kube-api-access-qz624\") pod \"barbican-operator-controller-manager-c4b7d6946-vg9f9\" (UID: \"a1aa4225-909d-49ae-8ac7-d987a760f2d2\") " 
pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.992685 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55wm\" (UniqueName: \"kubernetes.io/projected/049efcc4-9d6e-47ff-8476-a29e06c6f362-kube-api-access-s55wm\") pod \"designate-operator-controller-manager-55cc45767f-ddq5f\" (UID: \"049efcc4-9d6e-47ff-8476-a29e06c6f362\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" Feb 14 10:57:37 crc kubenswrapper[4736]: I0214 10:57:37.992762 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dcpr\" (UniqueName: \"kubernetes.io/projected/8b8b4f4d-ca75-4127-bf64-3db5839a9ccb-kube-api-access-9dcpr\") pod \"glance-operator-controller-manager-68fd459cc4-lwpwl\" (UID: \"8b8b4f4d-ca75-4127-bf64-3db5839a9ccb\") " pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.012976 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.018680 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.047225 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz624\" (UniqueName: \"kubernetes.io/projected/a1aa4225-909d-49ae-8ac7-d987a760f2d2-kube-api-access-qz624\") pod \"barbican-operator-controller-manager-c4b7d6946-vg9f9\" (UID: \"a1aa4225-909d-49ae-8ac7-d987a760f2d2\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.054950 4736 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.055322 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6qd\" (UniqueName: \"kubernetes.io/projected/d6185be6-e012-411d-9b85-c971e12aebbd-kube-api-access-cd6qd\") pod \"cinder-operator-controller-manager-768c8b45bb-jbwwk\" (UID: \"d6185be6-e012-411d-9b85-c971e12aebbd\") " pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.088836 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.096496 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275zx\" (UniqueName: \"kubernetes.io/projected/434321f7-faee-40e8-8d52-6c863d100da6-kube-api-access-275zx\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.096769 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpcqd\" (UniqueName: \"kubernetes.io/projected/f0eae102-9c64-42bb-b7eb-64c54f3bf219-kube-api-access-fpcqd\") pod \"heat-operator-controller-manager-9595d6797-g7wc9\" (UID: \"f0eae102-9c64-42bb-b7eb-64c54f3bf219\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.096891 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55wm\" (UniqueName: \"kubernetes.io/projected/049efcc4-9d6e-47ff-8476-a29e06c6f362-kube-api-access-s55wm\") pod 
\"designate-operator-controller-manager-55cc45767f-ddq5f\" (UID: \"049efcc4-9d6e-47ff-8476-a29e06c6f362\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.096995 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqpbw\" (UniqueName: \"kubernetes.io/projected/c9abe211-c0d9-4487-856f-12a41e4ad006-kube-api-access-rqpbw\") pod \"horizon-operator-controller-manager-54fb488b88-pcttg\" (UID: \"c9abe211-c0d9-4487-856f-12a41e4ad006\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.097105 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.097217 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgnc\" (UniqueName: \"kubernetes.io/projected/55648e35-636d-4321-bdfe-e7171a70e87d-kube-api-access-2vgnc\") pod \"ironic-operator-controller-manager-6494cdbf8f-mt8zx\" (UID: \"55648e35-636d-4321-bdfe-e7171a70e87d\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.097317 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dcpr\" (UniqueName: \"kubernetes.io/projected/8b8b4f4d-ca75-4127-bf64-3db5839a9ccb-kube-api-access-9dcpr\") pod \"glance-operator-controller-manager-68fd459cc4-lwpwl\" (UID: \"8b8b4f4d-ca75-4127-bf64-3db5839a9ccb\") " 
pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.097914 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.098077 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.098808 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.107104 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zmghk" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.112440 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.126807 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.127652 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.150177 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vhw9k" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.164453 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.164502 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.167483 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dcpr\" (UniqueName: \"kubernetes.io/projected/8b8b4f4d-ca75-4127-bf64-3db5839a9ccb-kube-api-access-9dcpr\") pod \"glance-operator-controller-manager-68fd459cc4-lwpwl\" (UID: \"8b8b4f4d-ca75-4127-bf64-3db5839a9ccb\") " pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.181554 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.183329 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55wm\" (UniqueName: \"kubernetes.io/projected/049efcc4-9d6e-47ff-8476-a29e06c6f362-kube-api-access-s55wm\") pod \"designate-operator-controller-manager-55cc45767f-ddq5f\" (UID: \"049efcc4-9d6e-47ff-8476-a29e06c6f362\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.190827 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.191717 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.193306 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.193861 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tgpk6" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.205091 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlvh8\" (UniqueName: \"kubernetes.io/projected/c2104410-cd10-43d8-84d1-8cd837d65ed4-kube-api-access-nlvh8\") pod \"mariadb-operator-controller-manager-66997756f6-p2d5f\" (UID: \"c2104410-cd10-43d8-84d1-8cd837d65ed4\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.205167 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275zx\" (UniqueName: \"kubernetes.io/projected/434321f7-faee-40e8-8d52-6c863d100da6-kube-api-access-275zx\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.205191 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpcqd\" (UniqueName: \"kubernetes.io/projected/f0eae102-9c64-42bb-b7eb-64c54f3bf219-kube-api-access-fpcqd\") pod \"heat-operator-controller-manager-9595d6797-g7wc9\" (UID: \"f0eae102-9c64-42bb-b7eb-64c54f3bf219\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.205212 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqpbw\" (UniqueName: \"kubernetes.io/projected/c9abe211-c0d9-4487-856f-12a41e4ad006-kube-api-access-rqpbw\") pod 
\"horizon-operator-controller-manager-54fb488b88-pcttg\" (UID: \"c9abe211-c0d9-4487-856f-12a41e4ad006\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.205229 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjqf7\" (UniqueName: \"kubernetes.io/projected/0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1-kube-api-access-rjqf7\") pod \"keystone-operator-controller-manager-6c78d668d5-7b9sw\" (UID: \"0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.205250 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.205269 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9x9j\" (UniqueName: \"kubernetes.io/projected/07e92003-0bdf-4e0b-a35c-d8f96e3a57f8-kube-api-access-j9x9j\") pod \"manila-operator-controller-manager-76fd76856-dmpmv\" (UID: \"07e92003-0bdf-4e0b-a35c-d8f96e3a57f8\") " pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.205299 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vgnc\" (UniqueName: \"kubernetes.io/projected/55648e35-636d-4321-bdfe-e7171a70e87d-kube-api-access-2vgnc\") pod \"ironic-operator-controller-manager-6494cdbf8f-mt8zx\" (UID: \"55648e35-636d-4321-bdfe-e7171a70e87d\") " 
pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" Feb 14 10:57:38 crc kubenswrapper[4736]: E0214 10:57:38.205949 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:38 crc kubenswrapper[4736]: E0214 10:57:38.205995 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert podName:434321f7-faee-40e8-8d52-6c863d100da6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:38.705981975 +0000 UTC m=+969.074609343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert") pod "infra-operator-controller-manager-66d6b5f488-6wjlq" (UID: "434321f7-faee-40e8-8d52-6c863d100da6") : secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.212293 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.213160 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.263322 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqpbw\" (UniqueName: \"kubernetes.io/projected/c9abe211-c0d9-4487-856f-12a41e4ad006-kube-api-access-rqpbw\") pod \"horizon-operator-controller-manager-54fb488b88-pcttg\" (UID: \"c9abe211-c0d9-4487-856f-12a41e4ad006\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.278447 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4j9vw" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.279562 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.293056 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275zx\" (UniqueName: \"kubernetes.io/projected/434321f7-faee-40e8-8d52-6c863d100da6-kube-api-access-275zx\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.293761 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vgnc\" (UniqueName: \"kubernetes.io/projected/55648e35-636d-4321-bdfe-e7171a70e87d-kube-api-access-2vgnc\") pod \"ironic-operator-controller-manager-6494cdbf8f-mt8zx\" (UID: \"55648e35-636d-4321-bdfe-e7171a70e87d\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.294627 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fpcqd\" (UniqueName: \"kubernetes.io/projected/f0eae102-9c64-42bb-b7eb-64c54f3bf219-kube-api-access-fpcqd\") pod \"heat-operator-controller-manager-9595d6797-g7wc9\" (UID: \"f0eae102-9c64-42bb-b7eb-64c54f3bf219\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.340088 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9x9j\" (UniqueName: \"kubernetes.io/projected/07e92003-0bdf-4e0b-a35c-d8f96e3a57f8-kube-api-access-j9x9j\") pod \"manila-operator-controller-manager-76fd76856-dmpmv\" (UID: \"07e92003-0bdf-4e0b-a35c-d8f96e3a57f8\") " pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.340574 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlvh8\" (UniqueName: \"kubernetes.io/projected/c2104410-cd10-43d8-84d1-8cd837d65ed4-kube-api-access-nlvh8\") pod \"mariadb-operator-controller-manager-66997756f6-p2d5f\" (UID: \"c2104410-cd10-43d8-84d1-8cd837d65ed4\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.340753 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjqf7\" (UniqueName: \"kubernetes.io/projected/0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1-kube-api-access-rjqf7\") pod \"keystone-operator-controller-manager-6c78d668d5-7b9sw\" (UID: \"0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.341436 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.362904 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.383086 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlvh8\" (UniqueName: \"kubernetes.io/projected/c2104410-cd10-43d8-84d1-8cd837d65ed4-kube-api-access-nlvh8\") pod \"mariadb-operator-controller-manager-66997756f6-p2d5f\" (UID: \"c2104410-cd10-43d8-84d1-8cd837d65ed4\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.387384 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjqf7\" (UniqueName: \"kubernetes.io/projected/0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1-kube-api-access-rjqf7\") pod \"keystone-operator-controller-manager-6c78d668d5-7b9sw\" (UID: \"0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.401454 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9x9j\" (UniqueName: \"kubernetes.io/projected/07e92003-0bdf-4e0b-a35c-d8f96e3a57f8-kube-api-access-j9x9j\") pod \"manila-operator-controller-manager-76fd76856-dmpmv\" (UID: \"07e92003-0bdf-4e0b-a35c-d8f96e3a57f8\") " pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.446773 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.446824 4736 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.447211 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9gm2\" (UniqueName: \"kubernetes.io/projected/6a8d2df6-3e2b-4120-8848-9ab5ae903da5-kube-api-access-t9gm2\") pod \"neutron-operator-controller-manager-54967dbbdf-ptgcj\" (UID: \"6a8d2df6-3e2b-4120-8848-9ab5ae903da5\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.447492 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.459594 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rb88g" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.520827 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.528016 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.548360 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9gm2\" (UniqueName: \"kubernetes.io/projected/6a8d2df6-3e2b-4120-8848-9ab5ae903da5-kube-api-access-t9gm2\") pod \"neutron-operator-controller-manager-54967dbbdf-ptgcj\" (UID: \"6a8d2df6-3e2b-4120-8848-9ab5ae903da5\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.548755 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.549191 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.550099 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.559569 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.566673 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fknnp" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.576834 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.585639 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.606480 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9gm2\" (UniqueName: \"kubernetes.io/projected/6a8d2df6-3e2b-4120-8848-9ab5ae903da5-kube-api-access-t9gm2\") pod \"neutron-operator-controller-manager-54967dbbdf-ptgcj\" (UID: \"6a8d2df6-3e2b-4120-8848-9ab5ae903da5\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.606564 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.607375 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.612362 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-v77rx" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.613308 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.614306 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.620350 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.620620 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lpclp" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.642155 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.659613 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh48s\" (UniqueName: \"kubernetes.io/projected/4b0b03c4-b031-408b-a6de-8b3af1064ebd-kube-api-access-sh48s\") pod \"octavia-operator-controller-manager-745bbbd77b-cztvd\" (UID: \"4b0b03c4-b031-408b-a6de-8b3af1064ebd\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.659911 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztwcm\" (UniqueName: \"kubernetes.io/projected/fc679b24-ad26-46c8-8d9e-28ef80a48090-kube-api-access-ztwcm\") pod \"nova-operator-controller-manager-5ddd85db87-spx2d\" (UID: \"fc679b24-ad26-46c8-8d9e-28ef80a48090\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.678493 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.708771 4736 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.709642 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.712448 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8w9pj" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.716267 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.777831 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.792379 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9t4k\" (UniqueName: \"kubernetes.io/projected/4800ac63-235a-4486-a61b-018e85369028-kube-api-access-h9t4k\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.792425 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5m47\" (UniqueName: \"kubernetes.io/projected/bd3596d4-d10d-45e0-b236-d0cca28bc09b-kube-api-access-k5m47\") pod \"placement-operator-controller-manager-57bd55f9b7-pqhqz\" (UID: \"bd3596d4-d10d-45e0-b236-d0cca28bc09b\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.792479 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztwcm\" (UniqueName: \"kubernetes.io/projected/fc679b24-ad26-46c8-8d9e-28ef80a48090-kube-api-access-ztwcm\") pod \"nova-operator-controller-manager-5ddd85db87-spx2d\" (UID: \"fc679b24-ad26-46c8-8d9e-28ef80a48090\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.792527 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptqkr\" (UniqueName: \"kubernetes.io/projected/05c7d113-70d7-4bbf-9c0e-4981d602acd3-kube-api-access-ptqkr\") pod \"ovn-operator-controller-manager-85c99d655-9zxzs\" (UID: \"05c7d113-70d7-4bbf-9c0e-4981d602acd3\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.792567 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.792610 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.792670 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh48s\" (UniqueName: 
\"kubernetes.io/projected/4b0b03c4-b031-408b-a6de-8b3af1064ebd-kube-api-access-sh48s\") pod \"octavia-operator-controller-manager-745bbbd77b-cztvd\" (UID: \"4b0b03c4-b031-408b-a6de-8b3af1064ebd\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" Feb 14 10:57:38 crc kubenswrapper[4736]: E0214 10:57:38.793869 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:38 crc kubenswrapper[4736]: E0214 10:57:38.793915 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert podName:434321f7-faee-40e8-8d52-6c863d100da6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:39.793899119 +0000 UTC m=+970.162526487 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert") pod "infra-operator-controller-manager-66d6b5f488-6wjlq" (UID: "434321f7-faee-40e8-8d52-6c863d100da6") : secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.800574 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.801451 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.804931 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-ngsjq" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.824483 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh48s\" (UniqueName: \"kubernetes.io/projected/4b0b03c4-b031-408b-a6de-8b3af1064ebd-kube-api-access-sh48s\") pod \"octavia-operator-controller-manager-745bbbd77b-cztvd\" (UID: \"4b0b03c4-b031-408b-a6de-8b3af1064ebd\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.828027 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztwcm\" (UniqueName: \"kubernetes.io/projected/fc679b24-ad26-46c8-8d9e-28ef80a48090-kube-api-access-ztwcm\") pod \"nova-operator-controller-manager-5ddd85db87-spx2d\" (UID: \"fc679b24-ad26-46c8-8d9e-28ef80a48090\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.836867 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.838295 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.842510 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gft9n" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.883150 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.884156 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.892912 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.894154 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.897944 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.898488 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9t4k\" (UniqueName: \"kubernetes.io/projected/4800ac63-235a-4486-a61b-018e85369028-kube-api-access-h9t4k\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.902207 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5m47\" (UniqueName: \"kubernetes.io/projected/bd3596d4-d10d-45e0-b236-d0cca28bc09b-kube-api-access-k5m47\") pod \"placement-operator-controller-manager-57bd55f9b7-pqhqz\" (UID: \"bd3596d4-d10d-45e0-b236-d0cca28bc09b\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.902376 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptqkr\" (UniqueName: \"kubernetes.io/projected/05c7d113-70d7-4bbf-9c0e-4981d602acd3-kube-api-access-ptqkr\") pod \"ovn-operator-controller-manager-85c99d655-9zxzs\" (UID: \"05c7d113-70d7-4bbf-9c0e-4981d602acd3\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.902595 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:38 crc kubenswrapper[4736]: E0214 
10:57:38.902836 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 10:57:38 crc kubenswrapper[4736]: E0214 10:57:38.902962 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert podName:4800ac63-235a-4486-a61b-018e85369028 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:39.402941571 +0000 UTC m=+969.771568939 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" (UID: "4800ac63-235a-4486-a61b-018e85369028") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.915428 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-8r2q7" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.915602 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-48ft6" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.916034 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.916058 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.924172 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.948427 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k5m47\" (UniqueName: \"kubernetes.io/projected/bd3596d4-d10d-45e0-b236-d0cca28bc09b-kube-api-access-k5m47\") pod \"placement-operator-controller-manager-57bd55f9b7-pqhqz\" (UID: \"bd3596d4-d10d-45e0-b236-d0cca28bc09b\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.957588 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b"] Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.967846 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.970327 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptqkr\" (UniqueName: \"kubernetes.io/projected/05c7d113-70d7-4bbf-9c0e-4981d602acd3-kube-api-access-ptqkr\") pod \"ovn-operator-controller-manager-85c99d655-9zxzs\" (UID: \"05c7d113-70d7-4bbf-9c0e-4981d602acd3\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" Feb 14 10:57:38 crc kubenswrapper[4736]: I0214 10:57:38.975718 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9t4k\" (UniqueName: \"kubernetes.io/projected/4800ac63-235a-4486-a61b-018e85369028-kube-api-access-h9t4k\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.006227 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrj2\" (UniqueName: \"kubernetes.io/projected/692863b5-b658-4d50-928e-b5357a279851-kube-api-access-mxrj2\") pod \"test-operator-controller-manager-8467ccb4c8-qr776\" 
(UID: \"692863b5-b658-4d50-928e-b5357a279851\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.006571 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfxqz\" (UniqueName: \"kubernetes.io/projected/13ed197e-630c-4788-863e-23be47efe228-kube-api-access-sfxqz\") pod \"watcher-operator-controller-manager-6c469bc6bb-2p58b\" (UID: \"13ed197e-630c-4788-863e-23be47efe228\") " pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.006601 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2qp\" (UniqueName: \"kubernetes.io/projected/332bd6ec-7fc0-4c92-bd0e-491f238a8680-kube-api-access-ms2qp\") pod \"telemetry-operator-controller-manager-56dc67d744-h52ld\" (UID: \"332bd6ec-7fc0-4c92-bd0e-491f238a8680\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.006619 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhsfv\" (UniqueName: \"kubernetes.io/projected/391b46e9-4f14-4b12-9c9a-800eecfc51af-kube-api-access-dhsfv\") pod \"swift-operator-controller-manager-79558bbfbf-9dgfx\" (UID: \"391b46e9-4f14-4b12-9c9a-800eecfc51af\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.050574 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.112774 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfxqz\" (UniqueName: \"kubernetes.io/projected/13ed197e-630c-4788-863e-23be47efe228-kube-api-access-sfxqz\") pod \"watcher-operator-controller-manager-6c469bc6bb-2p58b\" (UID: \"13ed197e-630c-4788-863e-23be47efe228\") " pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.112822 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2qp\" (UniqueName: \"kubernetes.io/projected/332bd6ec-7fc0-4c92-bd0e-491f238a8680-kube-api-access-ms2qp\") pod \"telemetry-operator-controller-manager-56dc67d744-h52ld\" (UID: \"332bd6ec-7fc0-4c92-bd0e-491f238a8680\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.112847 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhsfv\" (UniqueName: \"kubernetes.io/projected/391b46e9-4f14-4b12-9c9a-800eecfc51af-kube-api-access-dhsfv\") pod \"swift-operator-controller-manager-79558bbfbf-9dgfx\" (UID: \"391b46e9-4f14-4b12-9c9a-800eecfc51af\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.112936 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrj2\" (UniqueName: \"kubernetes.io/projected/692863b5-b658-4d50-928e-b5357a279851-kube-api-access-mxrj2\") pod \"test-operator-controller-manager-8467ccb4c8-qr776\" (UID: \"692863b5-b658-4d50-928e-b5357a279851\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 
10:57:39.140415 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.142618 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk"] Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.145022 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.178304 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhsfv\" (UniqueName: \"kubernetes.io/projected/391b46e9-4f14-4b12-9c9a-800eecfc51af-kube-api-access-dhsfv\") pod \"swift-operator-controller-manager-79558bbfbf-9dgfx\" (UID: \"391b46e9-4f14-4b12-9c9a-800eecfc51af\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.180771 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.181728 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.181015 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cvmkk" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.186786 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2qp\" (UniqueName: \"kubernetes.io/projected/332bd6ec-7fc0-4c92-bd0e-491f238a8680-kube-api-access-ms2qp\") pod \"telemetry-operator-controller-manager-56dc67d744-h52ld\" (UID: \"332bd6ec-7fc0-4c92-bd0e-491f238a8680\") " 
pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.180168 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxrj2\" (UniqueName: \"kubernetes.io/projected/692863b5-b658-4d50-928e-b5357a279851-kube-api-access-mxrj2\") pod \"test-operator-controller-manager-8467ccb4c8-qr776\" (UID: \"692863b5-b658-4d50-928e-b5357a279851\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.195768 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk"] Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.228199 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfxqz\" (UniqueName: \"kubernetes.io/projected/13ed197e-630c-4788-863e-23be47efe228-kube-api-access-sfxqz\") pod \"watcher-operator-controller-manager-6c469bc6bb-2p58b\" (UID: \"13ed197e-630c-4788-863e-23be47efe228\") " pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.233920 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.233989 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbr4b\" (UniqueName: \"kubernetes.io/projected/18979fdb-9863-4a61-a6cc-5984b041d7c6-kube-api-access-vbr4b\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" 
(UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.234028 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.243921 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.249398 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.272238 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz"] Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.273312 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.281252 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz"] Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.281567 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.283119 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jpwdg" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.305836 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbh2"] Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.336412 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.336458 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbr4b\" (UniqueName: \"kubernetes.io/projected/18979fdb-9863-4a61-a6cc-5984b041d7c6-kube-api-access-vbr4b\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.336498 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56pkd\" (UniqueName: \"kubernetes.io/projected/dd5e2ee2-c48c-40fd-9a02-ce871056600f-kube-api-access-56pkd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4x6xz\" (UID: \"dd5e2ee2-c48c-40fd-9a02-ce871056600f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.336522 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.336658 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.336704 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:39.836687673 +0000 UTC m=+970.205315041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "metrics-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.336969 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.337000 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:39.836993252 +0000 UTC m=+970.205620620 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "webhook-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.339648 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9"] Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.404567 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbr4b\" (UniqueName: \"kubernetes.io/projected/18979fdb-9863-4a61-a6cc-5984b041d7c6-kube-api-access-vbr4b\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.433169 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk"] Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.433316 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.441093 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56pkd\" (UniqueName: \"kubernetes.io/projected/dd5e2ee2-c48c-40fd-9a02-ce871056600f-kube-api-access-56pkd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4x6xz\" (UID: \"dd5e2ee2-c48c-40fd-9a02-ce871056600f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.441194 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.442210 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.442246 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert podName:4800ac63-235a-4486-a61b-018e85369028 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:40.442233906 +0000 UTC m=+970.810861274 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" (UID: "4800ac63-235a-4486-a61b-018e85369028") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.477591 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.477954 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56pkd\" (UniqueName: \"kubernetes.io/projected/dd5e2ee2-c48c-40fd-9a02-ce871056600f-kube-api-access-56pkd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4x6xz\" (UID: \"dd5e2ee2-c48c-40fd-9a02-ce871056600f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" Feb 14 10:57:39 crc kubenswrapper[4736]: W0214 10:57:39.538389 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6185be6_e012_411d_9b85_c971e12aebbd.slice/crio-6313ef8b506cb8916eacd098d2eca06c345b1701ec0e6ade296997163d9d621f WatchSource:0}: Error finding container 6313ef8b506cb8916eacd098d2eca06c345b1701ec0e6ade296997163d9d621f: Status 404 returned error can't find the container with id 6313ef8b506cb8916eacd098d2eca06c345b1701ec0e6ade296997163d9d621f Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.588814 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f"] Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.602677 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg"] Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.631067 4736 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.649580 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" event={"ID":"d6185be6-e012-411d-9b85-c971e12aebbd","Type":"ContainerStarted","Data":"6313ef8b506cb8916eacd098d2eca06c345b1701ec0e6ade296997163d9d621f"} Feb 14 10:57:39 crc kubenswrapper[4736]: W0214 10:57:39.653574 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9abe211_c0d9_4487_856f_12a41e4ad006.slice/crio-0bc0b5c439581134e65a5e12f5400b8839e819749894143151d66ce4092f8b88 WatchSource:0}: Error finding container 0bc0b5c439581134e65a5e12f5400b8839e819749894143151d66ce4092f8b88: Status 404 returned error can't find the container with id 0bc0b5c439581134e65a5e12f5400b8839e819749894143151d66ce4092f8b88 Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.653877 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" event={"ID":"a1aa4225-909d-49ae-8ac7-d987a760f2d2","Type":"ContainerStarted","Data":"af1d1fd015d37fefc2b6d81266c1d3d1b0e068b7764f3260cad7905bdd720d80"} Feb 14 10:57:39 crc kubenswrapper[4736]: W0214 10:57:39.654022 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod049efcc4_9d6e_47ff_8476_a29e06c6f362.slice/crio-c40e4148ff90828d1f9edf18bff2a2a16ffecc0da2216379de5d213b73f9b78a WatchSource:0}: Error finding container c40e4148ff90828d1f9edf18bff2a2a16ffecc0da2216379de5d213b73f9b78a: Status 404 returned error can't find the container with id c40e4148ff90828d1f9edf18bff2a2a16ffecc0da2216379de5d213b73f9b78a Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.657567 4736 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl"] Feb 14 10:57:39 crc kubenswrapper[4736]: W0214 10:57:39.669417 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8b4f4d_ca75_4127_bf64_3db5839a9ccb.slice/crio-0b7821ce44da2d81786f14e322c4c3a29896f293386de54cd591f491e31ef219 WatchSource:0}: Error finding container 0b7821ce44da2d81786f14e322c4c3a29896f293386de54cd591f491e31ef219: Status 404 returned error can't find the container with id 0b7821ce44da2d81786f14e322c4c3a29896f293386de54cd591f491e31ef219 Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.852048 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.852373 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.852417 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:39 crc 
kubenswrapper[4736]: E0214 10:57:39.852564 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.852617 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:40.852599411 +0000 UTC m=+971.221226779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "metrics-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.852942 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.852968 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert podName:434321f7-faee-40e8-8d52-6c863d100da6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:41.852960101 +0000 UTC m=+972.221587459 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert") pod "infra-operator-controller-manager-66d6b5f488-6wjlq" (UID: "434321f7-faee-40e8-8d52-6c863d100da6") : secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.853003 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: E0214 10:57:39.853020 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:40.853014723 +0000 UTC m=+971.221642091 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "webhook-server-cert" not found Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.933066 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9"] Feb 14 10:57:39 crc kubenswrapper[4736]: W0214 10:57:39.938435 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0eae102_9c64_42bb_b7eb_64c54f3bf219.slice/crio-9bcb4e45b33a26a82ac00e9adbd1ad90c02b505578226780fe51bdddde642b3d WatchSource:0}: Error finding container 9bcb4e45b33a26a82ac00e9adbd1ad90c02b505578226780fe51bdddde642b3d: Status 404 returned error can't find the container with id 9bcb4e45b33a26a82ac00e9adbd1ad90c02b505578226780fe51bdddde642b3d Feb 14 10:57:39 crc kubenswrapper[4736]: I0214 10:57:39.940967 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx"] Feb 14 10:57:39 crc kubenswrapper[4736]: W0214 10:57:39.956890 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55648e35_636d_4321_bdfe_e7171a70e87d.slice/crio-23a17e4eb883513930950c3d3829539bab7bf959b5b17af4cea9e49b3ec44c90 WatchSource:0}: Error finding container 23a17e4eb883513930950c3d3829539bab7bf959b5b17af4cea9e49b3ec44c90: Status 404 returned error can't find the container with id 23a17e4eb883513930950c3d3829539bab7bf959b5b17af4cea9e49b3ec44c90 Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.151963 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd"] Feb 14 10:57:40 crc kubenswrapper[4736]: W0214 10:57:40.178591 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b0b03c4_b031_408b_a6de_8b3af1064ebd.slice/crio-1cb024f5a1d797edfe7de1db9a4f25b66c9ff59b253bbdcdf4fe50b6d39d1747 WatchSource:0}: Error finding container 1cb024f5a1d797edfe7de1db9a4f25b66c9ff59b253bbdcdf4fe50b6d39d1747: Status 404 returned error can't find the container with id 1cb024f5a1d797edfe7de1db9a4f25b66c9ff59b253bbdcdf4fe50b6d39d1747 Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.180417 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw"] Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.197350 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv"] Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.218385 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj"] Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 
10:57:40.330324 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f"] Feb 14 10:57:40 crc kubenswrapper[4736]: W0214 10:57:40.346855 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2104410_cd10_43d8_84d1_8cd837d65ed4.slice/crio-c546161842abd78d83aa6053b6d94d0e026bbb057659776c2880080928746675 WatchSource:0}: Error finding container c546161842abd78d83aa6053b6d94d0e026bbb057659776c2880080928746675: Status 404 returned error can't find the container with id c546161842abd78d83aa6053b6d94d0e026bbb057659776c2880080928746675 Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.372391 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs"] Feb 14 10:57:40 crc kubenswrapper[4736]: W0214 10:57:40.382280 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod692863b5_b658_4d50_928e_b5357a279851.slice/crio-97f22fcd9b23ad8ef2579dbb3c9bdbb798093d1d4b30e1ad9a1071add2c8bf76 WatchSource:0}: Error finding container 97f22fcd9b23ad8ef2579dbb3c9bdbb798093d1d4b30e1ad9a1071add2c8bf76: Status 404 returned error can't find the container with id 97f22fcd9b23ad8ef2579dbb3c9bdbb798093d1d4b30e1ad9a1071add2c8bf76 Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.385498 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776"] Feb 14 10:57:40 crc kubenswrapper[4736]: W0214 10:57:40.432442 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc679b24_ad26_46c8_8d9e_28ef80a48090.slice/crio-f17fbc30c14c0b2adc3b70d8ffa38ff8a6af833b9dd2e365e855fa900f4d1b7f WatchSource:0}: Error finding container 
f17fbc30c14c0b2adc3b70d8ffa38ff8a6af833b9dd2e365e855fa900f4d1b7f: Status 404 returned error can't find the container with id f17fbc30c14c0b2adc3b70d8ffa38ff8a6af833b9dd2e365e855fa900f4d1b7f Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.432849 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d"] Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.434194 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztwcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ddd85db87-spx2d_openstack-operators(fc679b24-ad26-46c8-8d9e-28ef80a48090): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.436055 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" podUID="fc679b24-ad26-46c8-8d9e-28ef80a48090" Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.470907 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.472047 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 
10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.472159 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert podName:4800ac63-235a-4486-a61b-018e85369028 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:42.472141566 +0000 UTC m=+972.840768934 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" (UID: "4800ac63-235a-4486-a61b-018e85369028") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.645500 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5m47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-pqhqz_openstack-operators(bd3596d4-d10d-45e0-b236-d0cca28bc09b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.646811 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" podUID="bd3596d4-d10d-45e0-b236-d0cca28bc09b" Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.651968 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b"] Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.672260 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld"] Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.673441 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz"] Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.678817 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" event={"ID":"049efcc4-9d6e-47ff-8476-a29e06c6f362","Type":"ContainerStarted","Data":"c40e4148ff90828d1f9edf18bff2a2a16ffecc0da2216379de5d213b73f9b78a"} Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.678999 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ms2qp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-h52ld_openstack-operators(332bd6ec-7fc0-4c92-bd0e-491f238a8680): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.679092 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhsfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-79558bbfbf-9dgfx_openstack-operators(391b46e9-4f14-4b12-9c9a-800eecfc51af): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.681388 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" podUID="391b46e9-4f14-4b12-9c9a-800eecfc51af" Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.681476 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" podUID="332bd6ec-7fc0-4c92-bd0e-491f238a8680" Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.681516 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx"] Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.690140 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" event={"ID":"4b0b03c4-b031-408b-a6de-8b3af1064ebd","Type":"ContainerStarted","Data":"1cb024f5a1d797edfe7de1db9a4f25b66c9ff59b253bbdcdf4fe50b6d39d1747"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.691692 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" event={"ID":"6a8d2df6-3e2b-4120-8848-9ab5ae903da5","Type":"ContainerStarted","Data":"8c97531cf0bea9e1029d8e8834dac10acfba93241db46012748be1b6734a6fbe"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.702785 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" event={"ID":"07e92003-0bdf-4e0b-a35c-d8f96e3a57f8","Type":"ContainerStarted","Data":"74379f537cc81fb610c22adbcb4bd6fe60adaa4c10f95437ff6b9bd25aa064ef"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.715381 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" event={"ID":"0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1","Type":"ContainerStarted","Data":"b9a7a0c898c69cb76b75b3d9a9947b34fdeb18d8cb8ca8fd51ab96a0e45b67c7"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.724479 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" event={"ID":"13ed197e-630c-4788-863e-23be47efe228","Type":"ContainerStarted","Data":"f0376a95d3a00b8c6df0abb8e97b62f0b71aa5febe6cd09ee9635f537cdd445c"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.727797 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" event={"ID":"692863b5-b658-4d50-928e-b5357a279851","Type":"ContainerStarted","Data":"97f22fcd9b23ad8ef2579dbb3c9bdbb798093d1d4b30e1ad9a1071add2c8bf76"} Feb 14 10:57:40 crc 
kubenswrapper[4736]: I0214 10:57:40.737904 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" event={"ID":"bd3596d4-d10d-45e0-b236-d0cca28bc09b","Type":"ContainerStarted","Data":"3925369691fe5a6b4d99daac4d87aa1b57b08553bf0a749354083f428a7bc1c2"} Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.739474 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" podUID="bd3596d4-d10d-45e0-b236-d0cca28bc09b" Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.747651 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" event={"ID":"05c7d113-70d7-4bbf-9c0e-4981d602acd3","Type":"ContainerStarted","Data":"0e63ef473a3c915497048f29cb736cf5ac3894c2c5585d605d21334d2dd86a02"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.753522 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" event={"ID":"8b8b4f4d-ca75-4127-bf64-3db5839a9ccb","Type":"ContainerStarted","Data":"0b7821ce44da2d81786f14e322c4c3a29896f293386de54cd591f491e31ef219"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.759494 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" event={"ID":"f0eae102-9c64-42bb-b7eb-64c54f3bf219","Type":"ContainerStarted","Data":"9bcb4e45b33a26a82ac00e9adbd1ad90c02b505578226780fe51bdddde642b3d"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.792306 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" event={"ID":"c9abe211-c0d9-4487-856f-12a41e4ad006","Type":"ContainerStarted","Data":"0bc0b5c439581134e65a5e12f5400b8839e819749894143151d66ce4092f8b88"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.802633 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" event={"ID":"55648e35-636d-4321-bdfe-e7171a70e87d","Type":"ContainerStarted","Data":"23a17e4eb883513930950c3d3829539bab7bf959b5b17af4cea9e49b3ec44c90"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.803535 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" event={"ID":"c2104410-cd10-43d8-84d1-8cd837d65ed4","Type":"ContainerStarted","Data":"c546161842abd78d83aa6053b6d94d0e026bbb057659776c2880080928746675"} Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.817782 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz"] Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.823882 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9mbh2" podUID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerName="registry-server" containerID="cri-o://c866058d25f9f3bbe1ea8f5e5f279c804d8f07a77d2feea4d8acf0d29fa7a176" gracePeriod=2 Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.824564 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" event={"ID":"fc679b24-ad26-46c8-8d9e-28ef80a48090","Type":"ContainerStarted","Data":"f17fbc30c14c0b2adc3b70d8ffa38ff8a6af833b9dd2e365e855fa900f4d1b7f"} Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.842709 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" podUID="fc679b24-ad26-46c8-8d9e-28ef80a48090" Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.889448 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:40 crc kubenswrapper[4736]: I0214 10:57:40.889539 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.889721 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.895119 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:42.89509133 +0000 UTC m=+973.263718698 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "metrics-server-cert" not found Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.895625 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 10:57:40 crc kubenswrapper[4736]: E0214 10:57:40.895664 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:42.895653526 +0000 UTC m=+973.264280974 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "webhook-server-cert" not found Feb 14 10:57:41 crc kubenswrapper[4736]: I0214 10:57:41.865652 4736 generic.go:334] "Generic (PLEG): container finished" podID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerID="c866058d25f9f3bbe1ea8f5e5f279c804d8f07a77d2feea4d8acf0d29fa7a176" exitCode=0 Feb 14 10:57:41 crc kubenswrapper[4736]: I0214 10:57:41.865791 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbh2" event={"ID":"701ac0f0-8351-4ae8-b5cf-e4be16f58a64","Type":"ContainerDied","Data":"c866058d25f9f3bbe1ea8f5e5f279c804d8f07a77d2feea4d8acf0d29fa7a176"} Feb 14 10:57:41 crc kubenswrapper[4736]: I0214 10:57:41.869933 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" 
event={"ID":"dd5e2ee2-c48c-40fd-9a02-ce871056600f","Type":"ContainerStarted","Data":"6b5c62769a30ffc3438be771233acbb5eecf52ed0040902a9ab5a0c2ccf5f3f6"} Feb 14 10:57:41 crc kubenswrapper[4736]: I0214 10:57:41.873856 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" event={"ID":"391b46e9-4f14-4b12-9c9a-800eecfc51af","Type":"ContainerStarted","Data":"673ca616baf3d3a1282f273b3ec855b64090440fe0e31796ae7c4803d8384683"} Feb 14 10:57:41 crc kubenswrapper[4736]: E0214 10:57:41.882423 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" podUID="391b46e9-4f14-4b12-9c9a-800eecfc51af" Feb 14 10:57:41 crc kubenswrapper[4736]: I0214 10:57:41.884052 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" event={"ID":"332bd6ec-7fc0-4c92-bd0e-491f238a8680","Type":"ContainerStarted","Data":"63d5a5abd164de06613d2b6c45cce867f2cb26c6d08f1466c3a62267b100e3fc"} Feb 14 10:57:41 crc kubenswrapper[4736]: E0214 10:57:41.891730 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" podUID="332bd6ec-7fc0-4c92-bd0e-491f238a8680" Feb 14 10:57:41 crc kubenswrapper[4736]: E0214 10:57:41.894614 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" podUID="fc679b24-ad26-46c8-8d9e-28ef80a48090" Feb 14 10:57:41 crc kubenswrapper[4736]: E0214 10:57:41.901839 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" podUID="bd3596d4-d10d-45e0-b236-d0cca28bc09b" Feb 14 10:57:41 crc kubenswrapper[4736]: I0214 10:57:41.908459 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:41 crc kubenswrapper[4736]: E0214 10:57:41.908626 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:41 crc kubenswrapper[4736]: E0214 10:57:41.908683 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert podName:434321f7-faee-40e8-8d52-6c863d100da6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:45.908665994 +0000 UTC m=+976.277293412 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert") pod "infra-operator-controller-manager-66d6b5f488-6wjlq" (UID: "434321f7-faee-40e8-8d52-6c863d100da6") : secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:41 crc kubenswrapper[4736]: I0214 10:57:41.920685 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.009557 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-catalog-content\") pod \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.009691 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwpkk\" (UniqueName: \"kubernetes.io/projected/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-kube-api-access-fwpkk\") pod \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.010800 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-utilities" (OuterVolumeSpecName: "utilities") pod "701ac0f0-8351-4ae8-b5cf-e4be16f58a64" (UID: "701ac0f0-8351-4ae8-b5cf-e4be16f58a64"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.011311 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-utilities\") pod \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\" (UID: \"701ac0f0-8351-4ae8-b5cf-e4be16f58a64\") " Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.011813 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.030878 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-kube-api-access-fwpkk" (OuterVolumeSpecName: "kube-api-access-fwpkk") pod "701ac0f0-8351-4ae8-b5cf-e4be16f58a64" (UID: "701ac0f0-8351-4ae8-b5cf-e4be16f58a64"). InnerVolumeSpecName "kube-api-access-fwpkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.044565 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "701ac0f0-8351-4ae8-b5cf-e4be16f58a64" (UID: "701ac0f0-8351-4ae8-b5cf-e4be16f58a64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.113446 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwpkk\" (UniqueName: \"kubernetes.io/projected/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-kube-api-access-fwpkk\") on node \"crc\" DevicePath \"\"" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.113482 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701ac0f0-8351-4ae8-b5cf-e4be16f58a64-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.519425 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:42 crc kubenswrapper[4736]: E0214 10:57:42.519624 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 10:57:42 crc kubenswrapper[4736]: E0214 10:57:42.519723 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert podName:4800ac63-235a-4486-a61b-018e85369028 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:46.519698413 +0000 UTC m=+976.888325861 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" (UID: "4800ac63-235a-4486-a61b-018e85369028") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.928620 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.928678 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:42 crc kubenswrapper[4736]: E0214 10:57:42.928808 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 10:57:42 crc kubenswrapper[4736]: E0214 10:57:42.928847 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:46.928833391 +0000 UTC m=+977.297460759 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "metrics-server-cert" not found Feb 14 10:57:42 crc kubenswrapper[4736]: E0214 10:57:42.929146 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 10:57:42 crc kubenswrapper[4736]: E0214 10:57:42.929236 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:46.929211992 +0000 UTC m=+977.297839500 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "webhook-server-cert" not found Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.931845 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mbh2" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.932206 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbh2" event={"ID":"701ac0f0-8351-4ae8-b5cf-e4be16f58a64","Type":"ContainerDied","Data":"85ca86f8d956611208e67980e42cf11b08d3996be27e0cee33f27a217a12c475"} Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.932238 4736 scope.go:117] "RemoveContainer" containerID="c866058d25f9f3bbe1ea8f5e5f279c804d8f07a77d2feea4d8acf0d29fa7a176" Feb 14 10:57:42 crc kubenswrapper[4736]: E0214 10:57:42.935616 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" podUID="391b46e9-4f14-4b12-9c9a-800eecfc51af" Feb 14 10:57:42 crc kubenswrapper[4736]: E0214 10:57:42.936392 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" podUID="332bd6ec-7fc0-4c92-bd0e-491f238a8680" Feb 14 10:57:42 crc kubenswrapper[4736]: I0214 10:57:42.996161 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbh2"] Feb 14 10:57:43 crc kubenswrapper[4736]: I0214 10:57:43.007802 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbh2"] Feb 14 10:57:43 crc kubenswrapper[4736]: I0214 10:57:43.013364 4736 scope.go:117] "RemoveContainer" 
containerID="fefec53c895f3021ce5d8fe203aca3e847079351dcabf67c43dbbf24663b3692" Feb 14 10:57:43 crc kubenswrapper[4736]: I0214 10:57:43.101292 4736 scope.go:117] "RemoveContainer" containerID="6dc410755a71960abed2cfdaad28f98b91490a9051f8f111e61195be4249d6c6" Feb 14 10:57:44 crc kubenswrapper[4736]: I0214 10:57:44.408090 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" path="/var/lib/kubelet/pods/701ac0f0-8351-4ae8-b5cf-e4be16f58a64/volumes" Feb 14 10:57:45 crc kubenswrapper[4736]: I0214 10:57:45.991488 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:45 crc kubenswrapper[4736]: E0214 10:57:45.991687 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:45 crc kubenswrapper[4736]: E0214 10:57:45.991772 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert podName:434321f7-faee-40e8-8d52-6c863d100da6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:53.991753889 +0000 UTC m=+984.360381247 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert") pod "infra-operator-controller-manager-66d6b5f488-6wjlq" (UID: "434321f7-faee-40e8-8d52-6c863d100da6") : secret "infra-operator-webhook-server-cert" not found Feb 14 10:57:46 crc kubenswrapper[4736]: I0214 10:57:46.526592 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:46 crc kubenswrapper[4736]: E0214 10:57:46.526779 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 10:57:46 crc kubenswrapper[4736]: E0214 10:57:46.526833 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert podName:4800ac63-235a-4486-a61b-018e85369028 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:54.52681459 +0000 UTC m=+984.895441958 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" (UID: "4800ac63-235a-4486-a61b-018e85369028") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 10:57:46 crc kubenswrapper[4736]: I0214 10:57:46.932460 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:46 crc kubenswrapper[4736]: I0214 10:57:46.932527 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:46 crc kubenswrapper[4736]: E0214 10:57:46.932645 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 10:57:46 crc kubenswrapper[4736]: E0214 10:57:46.932688 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:54.932675104 +0000 UTC m=+985.301302472 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "metrics-server-cert" not found Feb 14 10:57:46 crc kubenswrapper[4736]: E0214 10:57:46.933027 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 10:57:46 crc kubenswrapper[4736]: E0214 10:57:46.933051 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:57:54.933043685 +0000 UTC m=+985.301671053 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "webhook-server-cert" not found Feb 14 10:57:47 crc kubenswrapper[4736]: I0214 10:57:47.696308 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:57:47 crc kubenswrapper[4736]: I0214 10:57:47.696401 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:57:53 crc kubenswrapper[4736]: E0214 10:57:53.783776 4736 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:266738cd385e14296b89c37eeff8b15ee431689f42ac5b1755d31bb7d5d178d3" Feb 14 10:57:53 crc kubenswrapper[4736]: E0214 10:57:53.784425 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:266738cd385e14296b89c37eeff8b15ee431689f42ac5b1755d31bb7d5d178d3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9dcpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-68fd459cc4-lwpwl_openstack-operators(8b8b4f4d-ca75-4127-bf64-3db5839a9ccb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:57:53 crc kubenswrapper[4736]: E0214 10:57:53.786600 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" podUID="8b8b4f4d-ca75-4127-bf64-3db5839a9ccb" Feb 14 10:57:54 crc kubenswrapper[4736]: E0214 10:57:54.022431 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:266738cd385e14296b89c37eeff8b15ee431689f42ac5b1755d31bb7d5d178d3\\\"\"" pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" podUID="8b8b4f4d-ca75-4127-bf64-3db5839a9ccb" Feb 14 10:57:54 crc kubenswrapper[4736]: I0214 10:57:54.028585 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:54 crc kubenswrapper[4736]: I0214 10:57:54.036442 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/434321f7-faee-40e8-8d52-6c863d100da6-cert\") pod \"infra-operator-controller-manager-66d6b5f488-6wjlq\" (UID: \"434321f7-faee-40e8-8d52-6c863d100da6\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:54 crc kubenswrapper[4736]: I0214 10:57:54.199765 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:57:54 crc kubenswrapper[4736]: I0214 10:57:54.535557 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:54 crc kubenswrapper[4736]: I0214 10:57:54.545667 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4800ac63-235a-4486-a61b-018e85369028-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt\" (UID: \"4800ac63-235a-4486-a61b-018e85369028\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:54 crc kubenswrapper[4736]: I0214 10:57:54.577414 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:57:54 crc kubenswrapper[4736]: I0214 10:57:54.941555 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:54 crc kubenswrapper[4736]: I0214 10:57:54.941633 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:54 crc kubenswrapper[4736]: E0214 10:57:54.942000 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 10:57:54 crc kubenswrapper[4736]: E0214 10:57:54.942067 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs podName:18979fdb-9863-4a61-a6cc-5984b041d7c6 nodeName:}" failed. No retries permitted until 2026-02-14 10:58:10.942045048 +0000 UTC m=+1001.310672416 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs") pod "openstack-operator-controller-manager-7f46fb7bd6-whwbk" (UID: "18979fdb-9863-4a61-a6cc-5984b041d7c6") : secret "webhook-server-cert" not found Feb 14 10:57:54 crc kubenswrapper[4736]: I0214 10:57:54.947534 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-metrics-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:57:59 crc kubenswrapper[4736]: E0214 10:57:59.525236 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469" Feb 14 10:57:59 crc kubenswrapper[4736]: E0214 10:57:59.525974 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rjqf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c78d668d5-7b9sw_openstack-operators(0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:57:59 crc kubenswrapper[4736]: E0214 10:57:59.527161 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" podUID="0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1" Feb 14 10:58:00 crc kubenswrapper[4736]: E0214 10:58:00.020431 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89" Feb 14 10:58:00 crc kubenswrapper[4736]: E0214 10:58:00.021057 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9gm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54967dbbdf-ptgcj_openstack-operators(6a8d2df6-3e2b-4120-8848-9ab5ae903da5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:00 crc kubenswrapper[4736]: E0214 10:58:00.022221 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" podUID="6a8d2df6-3e2b-4120-8848-9ab5ae903da5" Feb 14 10:58:00 crc kubenswrapper[4736]: E0214 10:58:00.070709 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" podUID="0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1" Feb 14 10:58:00 crc kubenswrapper[4736]: E0214 10:58:00.070709 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" podUID="6a8d2df6-3e2b-4120-8848-9ab5ae903da5" Feb 14 10:58:01 crc kubenswrapper[4736]: E0214 10:58:01.766362 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:88d2227c383e6ca639e7289a904c4ad19923956ffb0f9ad2ab1dbce393c128dc" Feb 14 10:58:01 crc kubenswrapper[4736]: E0214 10:58:01.766534 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:88d2227c383e6ca639e7289a904c4ad19923956ffb0f9ad2ab1dbce393c128dc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cd6qd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-768c8b45bb-jbwwk_openstack-operators(d6185be6-e012-411d-9b85-c971e12aebbd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:01 crc kubenswrapper[4736]: E0214 10:58:01.767786 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" podUID="d6185be6-e012-411d-9b85-c971e12aebbd" Feb 14 10:58:02 crc kubenswrapper[4736]: E0214 10:58:02.084625 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:88d2227c383e6ca639e7289a904c4ad19923956ffb0f9ad2ab1dbce393c128dc\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" podUID="d6185be6-e012-411d-9b85-c971e12aebbd" Feb 14 10:58:02 crc kubenswrapper[4736]: E0214 10:58:02.584155 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:6859cc0cf730f0780b90700447d91238a618c05465420960d07aa894abaf05e4" Feb 14 10:58:02 crc kubenswrapper[4736]: E0214 10:58:02.584357 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:6859cc0cf730f0780b90700447d91238a618c05465420960d07aa894abaf05e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vgnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6494cdbf8f-mt8zx_openstack-operators(55648e35-636d-4321-bdfe-e7171a70e87d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:02 crc kubenswrapper[4736]: E0214 10:58:02.585476 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" podUID="55648e35-636d-4321-bdfe-e7171a70e87d" Feb 14 10:58:03 crc kubenswrapper[4736]: E0214 10:58:03.097694 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:6859cc0cf730f0780b90700447d91238a618c05465420960d07aa894abaf05e4\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" podUID="55648e35-636d-4321-bdfe-e7171a70e87d" Feb 14 10:58:03 crc kubenswrapper[4736]: E0214 10:58:03.435573 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816" Feb 14 10:58:03 crc kubenswrapper[4736]: E0214 10:58:03.436119 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptqkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-85c99d655-9zxzs_openstack-operators(05c7d113-70d7-4bbf-9c0e-4981d602acd3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:03 crc kubenswrapper[4736]: E0214 10:58:03.437452 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" podUID="05c7d113-70d7-4bbf-9c0e-4981d602acd3" Feb 14 10:58:04 crc kubenswrapper[4736]: E0214 10:58:04.103066 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" podUID="05c7d113-70d7-4bbf-9c0e-4981d602acd3" Feb 14 10:58:05 crc kubenswrapper[4736]: E0214 10:58:05.726221 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:00e0076b910b180d2ee76f7fa74f058fd1e2bee9e313f3a87c5f84bdd2600e2a" Feb 14 10:58:05 crc kubenswrapper[4736]: E0214 10:58:05.726879 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:00e0076b910b180d2ee76f7fa74f058fd1e2bee9e313f3a87c5f84bdd2600e2a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqpbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-54fb488b88-pcttg_openstack-operators(c9abe211-c0d9-4487-856f-12a41e4ad006): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:05 crc kubenswrapper[4736]: E0214 10:58:05.728397 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" podUID="c9abe211-c0d9-4487-856f-12a41e4ad006" Feb 14 10:58:06 crc kubenswrapper[4736]: E0214 10:58:06.114484 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:00e0076b910b180d2ee76f7fa74f058fd1e2bee9e313f3a87c5f84bdd2600e2a\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" podUID="c9abe211-c0d9-4487-856f-12a41e4ad006" Feb 14 10:58:06 crc kubenswrapper[4736]: E0214 10:58:06.686987 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:a5396a8d7e5ca6ddabfa92744f0d4adab9de0bbe712e8cdab1bf13576b7ac8c8" Feb 14 10:58:06 crc kubenswrapper[4736]: E0214 10:58:06.687234 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:a5396a8d7e5ca6ddabfa92744f0d4adab9de0bbe712e8cdab1bf13576b7ac8c8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j9x9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-76fd76856-dmpmv_openstack-operators(07e92003-0bdf-4e0b-a35c-d8f96e3a57f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:06 crc kubenswrapper[4736]: E0214 10:58:06.688651 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" podUID="07e92003-0bdf-4e0b-a35c-d8f96e3a57f8" Feb 14 10:58:07 crc kubenswrapper[4736]: E0214 10:58:07.124755 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:a5396a8d7e5ca6ddabfa92744f0d4adab9de0bbe712e8cdab1bf13576b7ac8c8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" podUID="07e92003-0bdf-4e0b-a35c-d8f96e3a57f8" Feb 14 10:58:07 crc kubenswrapper[4736]: E0214 10:58:07.299454 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:451de102ba7012200edcaea1cd9b1bc7fd3fbaac906ea0fb8e67a32c4e619a76" Feb 14 10:58:07 crc kubenswrapper[4736]: E0214 10:58:07.300001 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:451de102ba7012200edcaea1cd9b1bc7fd3fbaac906ea0fb8e67a32c4e619a76,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfxqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6c469bc6bb-2p58b_openstack-operators(13ed197e-630c-4788-863e-23be47efe228): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:07 crc kubenswrapper[4736]: E0214 10:58:07.301233 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" podUID="13ed197e-630c-4788-863e-23be47efe228" Feb 14 10:58:08 crc kubenswrapper[4736]: E0214 10:58:08.128147 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:451de102ba7012200edcaea1cd9b1bc7fd3fbaac906ea0fb8e67a32c4e619a76\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" podUID="13ed197e-630c-4788-863e-23be47efe228" Feb 14 10:58:09 crc kubenswrapper[4736]: E0214 10:58:09.043020 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:4739c84b458c57b2434ee17e8029bbf3c8daf2a9ee3253fb385bbb4925ee8acd" Feb 14 10:58:09 crc kubenswrapper[4736]: E0214 10:58:09.043249 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:4739c84b458c57b2434ee17e8029bbf3c8daf2a9ee3253fb385bbb4925ee8acd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nlvh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-66997756f6-p2d5f_openstack-operators(c2104410-cd10-43d8-84d1-8cd837d65ed4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:09 crc kubenswrapper[4736]: E0214 10:58:09.044501 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" podUID="c2104410-cd10-43d8-84d1-8cd837d65ed4" Feb 14 10:58:09 crc kubenswrapper[4736]: E0214 10:58:09.139796 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:4739c84b458c57b2434ee17e8029bbf3c8daf2a9ee3253fb385bbb4925ee8acd\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" podUID="c2104410-cd10-43d8-84d1-8cd837d65ed4" Feb 14 10:58:09 crc kubenswrapper[4736]: E0214 10:58:09.646577 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27" Feb 14 10:58:09 crc kubenswrapper[4736]: E0214 10:58:09.647031 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mxrj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-qr776_openstack-operators(692863b5-b658-4d50-928e-b5357a279851): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:09 crc kubenswrapper[4736]: E0214 10:58:09.648206 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" podUID="692863b5-b658-4d50-928e-b5357a279851" Feb 14 10:58:10 crc kubenswrapper[4736]: E0214 10:58:10.143971 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" podUID="692863b5-b658-4d50-928e-b5357a279851" Feb 14 10:58:10 crc kubenswrapper[4736]: E0214 10:58:10.320400 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c" Feb 14 10:58:10 crc kubenswrapper[4736]: E0214 10:58:10.320657 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztwcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ddd85db87-spx2d_openstack-operators(fc679b24-ad26-46c8-8d9e-28ef80a48090): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:10 crc kubenswrapper[4736]: E0214 10:58:10.322734 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" podUID="fc679b24-ad26-46c8-8d9e-28ef80a48090" Feb 14 10:58:10 crc kubenswrapper[4736]: I0214 10:58:10.997692 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:58:11 crc kubenswrapper[4736]: I0214 10:58:11.004066 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18979fdb-9863-4a61-a6cc-5984b041d7c6-webhook-certs\") pod \"openstack-operator-controller-manager-7f46fb7bd6-whwbk\" (UID: \"18979fdb-9863-4a61-a6cc-5984b041d7c6\") " pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:58:11 crc kubenswrapper[4736]: I0214 10:58:11.098553 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:58:16 crc kubenswrapper[4736]: E0214 10:58:16.440772 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc" Feb 14 10:58:16 crc kubenswrapper[4736]: E0214 10:58:16.442798 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ms2qp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-h52ld_openstack-operators(332bd6ec-7fc0-4c92-bd0e-491f238a8680): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:16 crc kubenswrapper[4736]: E0214 10:58:16.444419 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" podUID="332bd6ec-7fc0-4c92-bd0e-491f238a8680" Feb 14 10:58:17 crc kubenswrapper[4736]: I0214 10:58:17.695689 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:58:17 crc kubenswrapper[4736]: I0214 10:58:17.695766 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" 
podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:58:18 crc kubenswrapper[4736]: E0214 10:58:18.093496 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 14 10:58:18 crc kubenswrapper[4736]: E0214 10:58:18.093834 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56pkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4x6xz_openstack-operators(dd5e2ee2-c48c-40fd-9a02-ce871056600f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:58:18 crc kubenswrapper[4736]: E0214 10:58:18.098450 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" podUID="dd5e2ee2-c48c-40fd-9a02-ce871056600f" Feb 14 10:58:18 crc kubenswrapper[4736]: E0214 10:58:18.237087 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" podUID="dd5e2ee2-c48c-40fd-9a02-ce871056600f" Feb 14 10:58:18 crc 
kubenswrapper[4736]: I0214 10:58:18.553025 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq"] Feb 14 10:58:18 crc kubenswrapper[4736]: W0214 10:58:18.582580 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod434321f7_faee_40e8_8d52_6c863d100da6.slice/crio-124ef0efdb42d4ea6bdf68831ce99f784a4f3ab05fdfed7c9c22100d2ab20fc0 WatchSource:0}: Error finding container 124ef0efdb42d4ea6bdf68831ce99f784a4f3ab05fdfed7c9c22100d2ab20fc0: Status 404 returned error can't find the container with id 124ef0efdb42d4ea6bdf68831ce99f784a4f3ab05fdfed7c9c22100d2ab20fc0 Feb 14 10:58:18 crc kubenswrapper[4736]: I0214 10:58:18.616164 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt"] Feb 14 10:58:18 crc kubenswrapper[4736]: W0214 10:58:18.676471 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4800ac63_235a_4486_a61b_018e85369028.slice/crio-3cdc4bbe149e5a534679d3de1f045ad2b98f840b5208da28ed7c2f12b5b3a068 WatchSource:0}: Error finding container 3cdc4bbe149e5a534679d3de1f045ad2b98f840b5208da28ed7c2f12b5b3a068: Status 404 returned error can't find the container with id 3cdc4bbe149e5a534679d3de1f045ad2b98f840b5208da28ed7c2f12b5b3a068 Feb 14 10:58:18 crc kubenswrapper[4736]: I0214 10:58:18.722289 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk"] Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.214306 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" 
event={"ID":"4800ac63-235a-4486-a61b-018e85369028","Type":"ContainerStarted","Data":"3cdc4bbe149e5a534679d3de1f045ad2b98f840b5208da28ed7c2f12b5b3a068"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.216529 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" event={"ID":"bd3596d4-d10d-45e0-b236-d0cca28bc09b","Type":"ContainerStarted","Data":"78fca6beb07c73a2ac9c23580bdfc464e4b2e2f367cbf8a211e3a7faa606fb1d"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.216731 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.218039 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" event={"ID":"434321f7-faee-40e8-8d52-6c863d100da6","Type":"ContainerStarted","Data":"124ef0efdb42d4ea6bdf68831ce99f784a4f3ab05fdfed7c9c22100d2ab20fc0"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.219264 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" event={"ID":"8b8b4f4d-ca75-4127-bf64-3db5839a9ccb","Type":"ContainerStarted","Data":"2f6d7294d0e32ed8c146d15a9fe1fb87c554b85bb19c7db772cbb9c27a5ddfde"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.219834 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.221282 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" event={"ID":"18979fdb-9863-4a61-a6cc-5984b041d7c6","Type":"ContainerStarted","Data":"e63430da09f4d3f783a079def9b1be4f028433905e163443bc3112025a607bde"} Feb 14 10:58:19 crc 
kubenswrapper[4736]: I0214 10:58:19.222732 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" event={"ID":"6a8d2df6-3e2b-4120-8848-9ab5ae903da5","Type":"ContainerStarted","Data":"9fd65cecade00a8472b9d60854104087942923ba89c78da1f81ee6bd4473bd79"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.223177 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.224831 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" event={"ID":"d6185be6-e012-411d-9b85-c971e12aebbd","Type":"ContainerStarted","Data":"e87405de98ed2ac8295fb6d526ea57ea33d35e2693fb5f7b2fea13cdf57a7ad9"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.225052 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.225906 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" event={"ID":"55648e35-636d-4321-bdfe-e7171a70e87d","Type":"ContainerStarted","Data":"308f66896cb88095659829dcac16c8e41449253ea92d3dd5c02cd8c46e7c6089"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.226133 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.227167 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" 
event={"ID":"0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1","Type":"ContainerStarted","Data":"a44b102f81e88f5953fc39d032904015c595ab36f538001906e2ef41f949fa7a"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.227319 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.228335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" event={"ID":"f0eae102-9c64-42bb-b7eb-64c54f3bf219","Type":"ContainerStarted","Data":"c3696d57f305394a41c8520661d53df39ac8d7169280f3bb44b321e56a735f25"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.228535 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.229566 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" event={"ID":"4b0b03c4-b031-408b-a6de-8b3af1064ebd","Type":"ContainerStarted","Data":"187fdc6dfe63ca0403d55249dd649a742945e8dd89915366708f87bc4505d1b1"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.229623 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.231221 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" event={"ID":"a1aa4225-909d-49ae-8ac7-d987a760f2d2","Type":"ContainerStarted","Data":"fd7b921a01ae82d44b5a3c9c7e23bd15e798c0fc8741b7ec1972568063f3e704"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.231536 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.232532 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" event={"ID":"c9abe211-c0d9-4487-856f-12a41e4ad006","Type":"ContainerStarted","Data":"ea3a27a029a8a8dec6e9dbbbc1cfbe3025b33ce72faae2f49ce79e2403af3eb4"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.232673 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.233614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" event={"ID":"049efcc4-9d6e-47ff-8476-a29e06c6f362","Type":"ContainerStarted","Data":"f1c9abc01cade8f734c632b680421e0f47c60bc4f9c42e7d0312e15b7201af6d"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.233703 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.234962 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" event={"ID":"391b46e9-4f14-4b12-9c9a-800eecfc51af","Type":"ContainerStarted","Data":"0d80a94cc516e0d3b1998135fc3b1603a60f3530a8c95d61fe2eb79cf850225b"} Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.235268 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.251365 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" 
podStartSLOduration=3.76884003 podStartE2EDuration="41.251350571s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.645174245 +0000 UTC m=+971.013801613" lastFinishedPulling="2026-02-14 10:58:18.127684786 +0000 UTC m=+1008.496312154" observedRunningTime="2026-02-14 10:58:19.249352874 +0000 UTC m=+1009.617980242" watchObservedRunningTime="2026-02-14 10:58:19.251350571 +0000 UTC m=+1009.619977929" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.278503 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" podStartSLOduration=11.499114311 podStartE2EDuration="42.278486227s" podCreationTimestamp="2026-02-14 10:57:37 +0000 UTC" firstStartedPulling="2026-02-14 10:57:39.529128947 +0000 UTC m=+969.897756305" lastFinishedPulling="2026-02-14 10:58:10.308500853 +0000 UTC m=+1000.677128221" observedRunningTime="2026-02-14 10:58:19.275920554 +0000 UTC m=+1009.644547922" watchObservedRunningTime="2026-02-14 10:58:19.278486227 +0000 UTC m=+1009.647113595" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.313369 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" podStartSLOduration=3.22165654 podStartE2EDuration="41.313352535s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.232200726 +0000 UTC m=+970.600828094" lastFinishedPulling="2026-02-14 10:58:18.323896721 +0000 UTC m=+1008.692524089" observedRunningTime="2026-02-14 10:58:19.311337958 +0000 UTC m=+1009.679965326" watchObservedRunningTime="2026-02-14 10:58:19.313352535 +0000 UTC m=+1009.681979903" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.401045 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" 
podStartSLOduration=3.9995697630000002 podStartE2EDuration="42.401027943s" podCreationTimestamp="2026-02-14 10:57:37 +0000 UTC" firstStartedPulling="2026-02-14 10:57:39.686480229 +0000 UTC m=+970.055107597" lastFinishedPulling="2026-02-14 10:58:18.087938389 +0000 UTC m=+1008.456565777" observedRunningTime="2026-02-14 10:58:19.399771077 +0000 UTC m=+1009.768398455" watchObservedRunningTime="2026-02-14 10:58:19.401027943 +0000 UTC m=+1009.769655311" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.401332 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" podStartSLOduration=3.627554817 podStartE2EDuration="42.401326812s" podCreationTimestamp="2026-02-14 10:57:37 +0000 UTC" firstStartedPulling="2026-02-14 10:57:39.54991996 +0000 UTC m=+969.918547328" lastFinishedPulling="2026-02-14 10:58:18.323691945 +0000 UTC m=+1008.692319323" observedRunningTime="2026-02-14 10:58:19.377957303 +0000 UTC m=+1009.746584691" watchObservedRunningTime="2026-02-14 10:58:19.401326812 +0000 UTC m=+1009.769954180" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.448122 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" podStartSLOduration=3.93904629 podStartE2EDuration="42.4481053s" podCreationTimestamp="2026-02-14 10:57:37 +0000 UTC" firstStartedPulling="2026-02-14 10:57:39.973485591 +0000 UTC m=+970.342112959" lastFinishedPulling="2026-02-14 10:58:18.482544601 +0000 UTC m=+1008.851171969" observedRunningTime="2026-02-14 10:58:19.44493343 +0000 UTC m=+1009.813560798" watchObservedRunningTime="2026-02-14 10:58:19.4481053 +0000 UTC m=+1009.816732668" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.473916 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" 
podStartSLOduration=11.532927616 podStartE2EDuration="42.473895968s" podCreationTimestamp="2026-02-14 10:57:37 +0000 UTC" firstStartedPulling="2026-02-14 10:57:39.956970209 +0000 UTC m=+970.325597577" lastFinishedPulling="2026-02-14 10:58:10.897938551 +0000 UTC m=+1001.266565929" observedRunningTime="2026-02-14 10:58:19.472989122 +0000 UTC m=+1009.841616490" watchObservedRunningTime="2026-02-14 10:58:19.473895968 +0000 UTC m=+1009.842523346" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.514916 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" podStartSLOduration=3.798402948 podStartE2EDuration="42.514896502s" podCreationTimestamp="2026-02-14 10:57:37 +0000 UTC" firstStartedPulling="2026-02-14 10:57:39.666499018 +0000 UTC m=+970.035126386" lastFinishedPulling="2026-02-14 10:58:18.382992572 +0000 UTC m=+1008.751619940" observedRunningTime="2026-02-14 10:58:19.510890567 +0000 UTC m=+1009.879517955" watchObservedRunningTime="2026-02-14 10:58:19.514896502 +0000 UTC m=+1009.883523870" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.633929 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" podStartSLOduration=4.2219029710000004 podStartE2EDuration="41.633912117s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.679041932 +0000 UTC m=+971.047669300" lastFinishedPulling="2026-02-14 10:58:18.091051078 +0000 UTC m=+1008.459678446" observedRunningTime="2026-02-14 10:58:19.563983796 +0000 UTC m=+1009.932611164" watchObservedRunningTime="2026-02-14 10:58:19.633912117 +0000 UTC m=+1010.002539485" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.715178 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" 
podStartSLOduration=3.465931791 podStartE2EDuration="41.715157752s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.209329933 +0000 UTC m=+970.577957301" lastFinishedPulling="2026-02-14 10:58:18.458555894 +0000 UTC m=+1008.827183262" observedRunningTime="2026-02-14 10:58:19.634881055 +0000 UTC m=+1010.003508413" watchObservedRunningTime="2026-02-14 10:58:19.715157752 +0000 UTC m=+1010.083785120" Feb 14 10:58:19 crc kubenswrapper[4736]: I0214 10:58:19.716967 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" podStartSLOduration=10.488712195 podStartE2EDuration="41.716959134s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.198244787 +0000 UTC m=+970.566872155" lastFinishedPulling="2026-02-14 10:58:11.426491726 +0000 UTC m=+1001.795119094" observedRunningTime="2026-02-14 10:58:19.711629701 +0000 UTC m=+1010.080257069" watchObservedRunningTime="2026-02-14 10:58:19.716959134 +0000 UTC m=+1010.085586502" Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.256695 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" event={"ID":"05c7d113-70d7-4bbf-9c0e-4981d602acd3","Type":"ContainerStarted","Data":"8301e75ad71bb1c208f2a3b291a0341138d66c826e2f1e233b9e83976d98593c"} Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.257184 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.270980 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" 
event={"ID":"13ed197e-630c-4788-863e-23be47efe228","Type":"ContainerStarted","Data":"5b3e4c58d9b9697c254d5bd622a31953d0fd6fc5202450391895c11f10ec600e"} Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.271229 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.278200 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" event={"ID":"18979fdb-9863-4a61-a6cc-5984b041d7c6","Type":"ContainerStarted","Data":"a6e271902a326f15d254a278daccfa68321172e5f3bc337ad08b7ffdba05bfdf"} Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.278364 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.280030 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" event={"ID":"07e92003-0bdf-4e0b-a35c-d8f96e3a57f8","Type":"ContainerStarted","Data":"f1c5a22c2a0eb4835864d6a7d68d39a6228ee374f6a8eb454892bf46ac495f78"} Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.282225 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" podStartSLOduration=12.04741366 podStartE2EDuration="43.282211729s" podCreationTimestamp="2026-02-14 10:57:37 +0000 UTC" firstStartedPulling="2026-02-14 10:57:39.666804807 +0000 UTC m=+970.035432175" lastFinishedPulling="2026-02-14 10:58:10.901602876 +0000 UTC m=+1001.270230244" observedRunningTime="2026-02-14 10:58:19.751917584 +0000 UTC m=+1010.120544942" watchObservedRunningTime="2026-02-14 10:58:20.282211729 +0000 UTC m=+1010.650839087" Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.349480 
4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" podStartSLOduration=3.348990183 podStartE2EDuration="42.349466414s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.401405877 +0000 UTC m=+970.770033245" lastFinishedPulling="2026-02-14 10:58:19.401882108 +0000 UTC m=+1009.770509476" observedRunningTime="2026-02-14 10:58:20.288072727 +0000 UTC m=+1010.656700095" watchObservedRunningTime="2026-02-14 10:58:20.349466414 +0000 UTC m=+1010.718093782" Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.350156 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" podStartSLOduration=42.350152123 podStartE2EDuration="42.350152123s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:58:20.348111815 +0000 UTC m=+1010.716739183" watchObservedRunningTime="2026-02-14 10:58:20.350152123 +0000 UTC m=+1010.718779491" Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.384323 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" podStartSLOduration=3.59932246 podStartE2EDuration="42.38429001s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.619044899 +0000 UTC m=+970.987672267" lastFinishedPulling="2026-02-14 10:58:19.404012449 +0000 UTC m=+1009.772639817" observedRunningTime="2026-02-14 10:58:20.383891119 +0000 UTC m=+1010.752518487" watchObservedRunningTime="2026-02-14 10:58:20.38429001 +0000 UTC m=+1010.752917378" Feb 14 10:58:20 crc kubenswrapper[4736]: I0214 10:58:20.419393 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" podStartSLOduration=3.235992211 podStartE2EDuration="42.419379894s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.218499545 +0000 UTC m=+970.587126913" lastFinishedPulling="2026-02-14 10:58:19.401887228 +0000 UTC m=+1009.770514596" observedRunningTime="2026-02-14 10:58:20.414422573 +0000 UTC m=+1010.783049941" watchObservedRunningTime="2026-02-14 10:58:20.419379894 +0000 UTC m=+1010.788007262" Feb 14 10:58:21 crc kubenswrapper[4736]: E0214 10:58:21.405664 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" podUID="fc679b24-ad26-46c8-8d9e-28ef80a48090" Feb 14 10:58:23 crc kubenswrapper[4736]: I0214 10:58:23.401500 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 10:58:24 crc kubenswrapper[4736]: I0214 10:58:24.308736 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" event={"ID":"4800ac63-235a-4486-a61b-018e85369028","Type":"ContainerStarted","Data":"a94e7238f64d8aefea4e7a9c7f2369982acedcd744bf17edb5d0fd7c050a3cb1"} Feb 14 10:58:24 crc kubenswrapper[4736]: I0214 10:58:24.309512 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:58:24 crc kubenswrapper[4736]: I0214 10:58:24.310831 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" 
event={"ID":"434321f7-faee-40e8-8d52-6c863d100da6","Type":"ContainerStarted","Data":"71595c673c481c59da3ed2f896f8f3b271fae7f450cc4a2cb8a4d3a9f0913404"} Feb 14 10:58:24 crc kubenswrapper[4736]: I0214 10:58:24.311148 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:58:24 crc kubenswrapper[4736]: I0214 10:58:24.312490 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" event={"ID":"c2104410-cd10-43d8-84d1-8cd837d65ed4","Type":"ContainerStarted","Data":"65f96aa56337eba9da75f8f19080f95855cf7438c3f459672f8364a23ff34b24"} Feb 14 10:58:24 crc kubenswrapper[4736]: I0214 10:58:24.312653 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" Feb 14 10:58:24 crc kubenswrapper[4736]: I0214 10:58:24.341579 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" podStartSLOduration=41.259261297 podStartE2EDuration="46.341556751s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:58:18.700508188 +0000 UTC m=+1009.069135556" lastFinishedPulling="2026-02-14 10:58:23.782803632 +0000 UTC m=+1014.151431010" observedRunningTime="2026-02-14 10:58:24.338266777 +0000 UTC m=+1014.706894155" watchObservedRunningTime="2026-02-14 10:58:24.341556751 +0000 UTC m=+1014.710184119" Feb 14 10:58:24 crc kubenswrapper[4736]: I0214 10:58:24.375407 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" podStartSLOduration=42.194847014 podStartE2EDuration="47.375388859s" podCreationTimestamp="2026-02-14 10:57:37 +0000 UTC" firstStartedPulling="2026-02-14 10:58:18.595532844 +0000 UTC 
m=+1008.964160212" lastFinishedPulling="2026-02-14 10:58:23.776074679 +0000 UTC m=+1014.144702057" observedRunningTime="2026-02-14 10:58:24.369921303 +0000 UTC m=+1014.738548671" watchObservedRunningTime="2026-02-14 10:58:24.375388859 +0000 UTC m=+1014.744016227" Feb 14 10:58:24 crc kubenswrapper[4736]: I0214 10:58:24.384031 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" podStartSLOduration=2.853740264 podStartE2EDuration="46.384009576s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.350811142 +0000 UTC m=+970.719438500" lastFinishedPulling="2026-02-14 10:58:23.881080434 +0000 UTC m=+1014.249707812" observedRunningTime="2026-02-14 10:58:24.382894304 +0000 UTC m=+1014.751521702" watchObservedRunningTime="2026-02-14 10:58:24.384009576 +0000 UTC m=+1014.752636954" Feb 14 10:58:26 crc kubenswrapper[4736]: I0214 10:58:26.326822 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" event={"ID":"692863b5-b658-4d50-928e-b5357a279851","Type":"ContainerStarted","Data":"37a2259b345c905f0a94140781c00e1b8fe6396d9fd690f4c05403c3e4128726"} Feb 14 10:58:26 crc kubenswrapper[4736]: I0214 10:58:26.327070 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" Feb 14 10:58:26 crc kubenswrapper[4736]: I0214 10:58:26.345641 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" podStartSLOduration=2.717901935 podStartE2EDuration="48.34562593s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.390407923 +0000 UTC m=+970.759035291" lastFinishedPulling="2026-02-14 10:58:26.018131868 +0000 UTC m=+1016.386759286" 
observedRunningTime="2026-02-14 10:58:26.341314616 +0000 UTC m=+1016.709941994" watchObservedRunningTime="2026-02-14 10:58:26.34562593 +0000 UTC m=+1016.714253298" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.101129 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-vg9f9" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.116790 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-jbwwk" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.184979 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68fd459cc4-lwpwl" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.196382 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddq5f" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.290219 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-pcttg" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.358394 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-mt8zx" Feb 14 10:58:28 crc kubenswrapper[4736]: E0214 10:58:28.430119 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" podUID="332bd6ec-7fc0-4c92-bd0e-491f238a8680" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.529133 
4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.531548 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-76fd76856-dmpmv" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.551325 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-9595d6797-g7wc9" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.564005 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-7b9sw" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.647350 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-ptgcj" Feb 14 10:58:28 crc kubenswrapper[4736]: I0214 10:58:28.901494 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-cztvd" Feb 14 10:58:29 crc kubenswrapper[4736]: I0214 10:58:29.053995 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-pqhqz" Feb 14 10:58:29 crc kubenswrapper[4736]: I0214 10:58:29.252114 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-9zxzs" Feb 14 10:58:29 crc kubenswrapper[4736]: I0214 10:58:29.286432 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c469bc6bb-2p58b" Feb 14 10:58:29 crc kubenswrapper[4736]: I0214 10:58:29.436261 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-9dgfx" Feb 14 10:58:31 crc kubenswrapper[4736]: I0214 10:58:31.107831 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7f46fb7bd6-whwbk" Feb 14 10:58:31 crc kubenswrapper[4736]: I0214 10:58:31.377290 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" event={"ID":"dd5e2ee2-c48c-40fd-9a02-ce871056600f","Type":"ContainerStarted","Data":"a165bd3bc7b2a08c7f0a01915be401a2ee6a49f30e77bd4da73d6fe560c6848b"} Feb 14 10:58:31 crc kubenswrapper[4736]: I0214 10:58:31.393395 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4x6xz" podStartSLOduration=3.901710027 podStartE2EDuration="53.393372255s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.874982396 +0000 UTC m=+971.243609754" lastFinishedPulling="2026-02-14 10:58:30.366644604 +0000 UTC m=+1020.735271982" observedRunningTime="2026-02-14 10:58:31.391392068 +0000 UTC m=+1021.760019466" watchObservedRunningTime="2026-02-14 10:58:31.393372255 +0000 UTC m=+1021.761999633" Feb 14 10:58:34 crc kubenswrapper[4736]: I0214 10:58:34.207826 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-6wjlq" Feb 14 10:58:34 crc kubenswrapper[4736]: I0214 10:58:34.584049 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt" Feb 14 10:58:37 crc kubenswrapper[4736]: I0214 10:58:37.423029 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" 
event={"ID":"fc679b24-ad26-46c8-8d9e-28ef80a48090","Type":"ContainerStarted","Data":"9e0472cdd1a360d8394edb1c165059b242a046b472454eb4b78d1cca3ad9cb47"} Feb 14 10:58:37 crc kubenswrapper[4736]: I0214 10:58:37.423528 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" Feb 14 10:58:37 crc kubenswrapper[4736]: I0214 10:58:37.445601 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" podStartSLOduration=3.040842162 podStartE2EDuration="59.445585474s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.434057009 +0000 UTC m=+970.802684377" lastFinishedPulling="2026-02-14 10:58:36.838800321 +0000 UTC m=+1027.207427689" observedRunningTime="2026-02-14 10:58:37.445505212 +0000 UTC m=+1027.814132630" watchObservedRunningTime="2026-02-14 10:58:37.445585474 +0000 UTC m=+1027.814212842" Feb 14 10:58:38 crc kubenswrapper[4736]: I0214 10:58:38.590386 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-p2d5f" Feb 14 10:58:39 crc kubenswrapper[4736]: I0214 10:58:39.246733 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-qr776" Feb 14 10:58:40 crc kubenswrapper[4736]: I0214 10:58:40.446938 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" event={"ID":"332bd6ec-7fc0-4c92-bd0e-491f238a8680","Type":"ContainerStarted","Data":"7a3d3dc171c71cc6b6907230d038761a4833b5cdd6a4ad1115d65b316b9aec79"} Feb 14 10:58:40 crc kubenswrapper[4736]: I0214 10:58:40.447361 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" Feb 14 10:58:47 crc kubenswrapper[4736]: I0214 10:58:47.695489 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 10:58:47 crc kubenswrapper[4736]: I0214 10:58:47.696551 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 10:58:47 crc kubenswrapper[4736]: I0214 10:58:47.696616 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 10:58:47 crc kubenswrapper[4736]: I0214 10:58:47.697511 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0699b94691595822651ec4333c313c55f239b38c83c6b942a3933b33334d5715"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 10:58:47 crc kubenswrapper[4736]: I0214 10:58:47.697608 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://0699b94691595822651ec4333c313c55f239b38c83c6b942a3933b33334d5715" gracePeriod=600 Feb 14 10:58:48 crc kubenswrapper[4736]: I0214 10:58:48.514686 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="0699b94691595822651ec4333c313c55f239b38c83c6b942a3933b33334d5715" exitCode=0 Feb 14 10:58:48 crc kubenswrapper[4736]: I0214 10:58:48.514794 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"0699b94691595822651ec4333c313c55f239b38c83c6b942a3933b33334d5715"} Feb 14 10:58:48 crc kubenswrapper[4736]: I0214 10:58:48.515093 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"9999be9865e79e704addc20790845881e6f887c75a1494ff7df882251fb72d5a"} Feb 14 10:58:48 crc kubenswrapper[4736]: I0214 10:58:48.515124 4736 scope.go:117] "RemoveContainer" containerID="8b08eeda0c39616325bfc380aaaad11c6609c5f301d0c07f4fa3e51c6e12894e" Feb 14 10:58:48 crc kubenswrapper[4736]: I0214 10:58:48.551391 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" podStartSLOduration=11.209485889 podStartE2EDuration="1m10.551372597s" podCreationTimestamp="2026-02-14 10:57:38 +0000 UTC" firstStartedPulling="2026-02-14 10:57:40.678898628 +0000 UTC m=+971.047525996" lastFinishedPulling="2026-02-14 10:58:40.020785336 +0000 UTC m=+1030.389412704" observedRunningTime="2026-02-14 10:58:40.463212287 +0000 UTC m=+1030.831839665" watchObservedRunningTime="2026-02-14 10:58:48.551372597 +0000 UTC m=+1038.919999975" Feb 14 10:58:49 crc kubenswrapper[4736]: I0214 10:58:49.145804 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-spx2d" Feb 14 10:58:49 crc kubenswrapper[4736]: I0214 10:58:49.480320 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-h52ld" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.044253 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xfvpb"] Feb 14 10:59:07 crc kubenswrapper[4736]: E0214 10:59:07.047724 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerName="registry-server" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.047956 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerName="registry-server" Feb 14 10:59:07 crc kubenswrapper[4736]: E0214 10:59:07.048159 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerName="extract-utilities" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.048274 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerName="extract-utilities" Feb 14 10:59:07 crc kubenswrapper[4736]: E0214 10:59:07.048389 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerName="extract-content" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.048482 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerName="extract-content" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.048860 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="701ac0f0-8351-4ae8-b5cf-e4be16f58a64" containerName="registry-server" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.053633 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.057281 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xfvpb"] Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.059886 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.061731 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-dlmgq" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.062002 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.062460 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.138210 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zptc"] Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.139523 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.150090 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.156522 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zptc"] Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.189110 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hzw\" (UniqueName: \"kubernetes.io/projected/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-kube-api-access-m7hzw\") pod \"dnsmasq-dns-675f4bcbfc-xfvpb\" (UID: \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.189233 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-config\") pod \"dnsmasq-dns-675f4bcbfc-xfvpb\" (UID: \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.290325 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-config\") pod \"dnsmasq-dns-78dd6ddcc-4zptc\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.290376 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4zptc\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 
crc kubenswrapper[4736]: I0214 10:59:07.290415 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-config\") pod \"dnsmasq-dns-675f4bcbfc-xfvpb\" (UID: \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.290461 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7tm\" (UniqueName: \"kubernetes.io/projected/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-kube-api-access-zl7tm\") pod \"dnsmasq-dns-78dd6ddcc-4zptc\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.290484 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7hzw\" (UniqueName: \"kubernetes.io/projected/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-kube-api-access-m7hzw\") pod \"dnsmasq-dns-675f4bcbfc-xfvpb\" (UID: \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.292394 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-config\") pod \"dnsmasq-dns-675f4bcbfc-xfvpb\" (UID: \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.312842 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7hzw\" (UniqueName: \"kubernetes.io/projected/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-kube-api-access-m7hzw\") pod \"dnsmasq-dns-675f4bcbfc-xfvpb\" (UID: \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 
10:59:07.377671 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.394234 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl7tm\" (UniqueName: \"kubernetes.io/projected/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-kube-api-access-zl7tm\") pod \"dnsmasq-dns-78dd6ddcc-4zptc\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.394353 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-config\") pod \"dnsmasq-dns-78dd6ddcc-4zptc\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.394393 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4zptc\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.395588 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4zptc\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.395777 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-config\") pod \"dnsmasq-dns-78dd6ddcc-4zptc\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.428909 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl7tm\" (UniqueName: \"kubernetes.io/projected/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-kube-api-access-zl7tm\") pod \"dnsmasq-dns-78dd6ddcc-4zptc\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.453913 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:07 crc kubenswrapper[4736]: W0214 10:59:07.832647 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32a9ad7a_2da6_4bcf_8bf8_c4de3ccdaabb.slice/crio-61b37e51e84513d012750d6b5d6b552b7021b051b8310c07e0081077b8ad4308 WatchSource:0}: Error finding container 61b37e51e84513d012750d6b5d6b552b7021b051b8310c07e0081077b8ad4308: Status 404 returned error can't find the container with id 61b37e51e84513d012750d6b5d6b552b7021b051b8310c07e0081077b8ad4308 Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.834252 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xfvpb"] Feb 14 10:59:07 crc kubenswrapper[4736]: W0214 10:59:07.943922 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3765fe0e_e6fd_49a8_9cf1_dd02a29a99c9.slice/crio-084cb8926eebc5b7ea2ba63f00e07f2437ad708cdec2825f7b26e61a83b32552 WatchSource:0}: Error finding container 084cb8926eebc5b7ea2ba63f00e07f2437ad708cdec2825f7b26e61a83b32552: Status 404 returned error can't find the container with id 084cb8926eebc5b7ea2ba63f00e07f2437ad708cdec2825f7b26e61a83b32552 Feb 14 10:59:07 crc kubenswrapper[4736]: I0214 10:59:07.945001 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-4zptc"] Feb 14 10:59:08 crc kubenswrapper[4736]: I0214 10:59:08.664175 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" event={"ID":"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb","Type":"ContainerStarted","Data":"61b37e51e84513d012750d6b5d6b552b7021b051b8310c07e0081077b8ad4308"} Feb 14 10:59:08 crc kubenswrapper[4736]: I0214 10:59:08.666130 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" event={"ID":"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9","Type":"ContainerStarted","Data":"084cb8926eebc5b7ea2ba63f00e07f2437ad708cdec2825f7b26e61a83b32552"} Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.616811 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xfvpb"] Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.657261 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-547fn"] Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.658392 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.669425 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-547fn"] Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.729242 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6cd\" (UniqueName: \"kubernetes.io/projected/c7023bee-2454-47b8-b532-d02d6dbaf328-kube-api-access-5g6cd\") pod \"dnsmasq-dns-666b6646f7-547fn\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") " pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.729291 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-config\") pod \"dnsmasq-dns-666b6646f7-547fn\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") " pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.729331 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-dns-svc\") pod \"dnsmasq-dns-666b6646f7-547fn\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") " pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.831115 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-dns-svc\") pod \"dnsmasq-dns-666b6646f7-547fn\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") " pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.831241 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6cd\" (UniqueName: 
\"kubernetes.io/projected/c7023bee-2454-47b8-b532-d02d6dbaf328-kube-api-access-5g6cd\") pod \"dnsmasq-dns-666b6646f7-547fn\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") " pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.831284 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-config\") pod \"dnsmasq-dns-666b6646f7-547fn\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") " pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.832388 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-config\") pod \"dnsmasq-dns-666b6646f7-547fn\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") " pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.833061 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-dns-svc\") pod \"dnsmasq-dns-666b6646f7-547fn\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") " pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.861233 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6cd\" (UniqueName: \"kubernetes.io/projected/c7023bee-2454-47b8-b532-d02d6dbaf328-kube-api-access-5g6cd\") pod \"dnsmasq-dns-666b6646f7-547fn\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") " pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:09 crc kubenswrapper[4736]: I0214 10:59:09.984136 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.139008 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zptc"] Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.168025 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dsl8v"] Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.169265 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.224143 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dsl8v"] Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.241200 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnwjz\" (UniqueName: \"kubernetes.io/projected/5725a7cf-6124-4c35-910a-9c57203687fc-kube-api-access-gnwjz\") pod \"dnsmasq-dns-57d769cc4f-dsl8v\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.241295 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dsl8v\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.241315 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-config\") pod \"dnsmasq-dns-57d769cc4f-dsl8v\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 
10:59:10.344653 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnwjz\" (UniqueName: \"kubernetes.io/projected/5725a7cf-6124-4c35-910a-9c57203687fc-kube-api-access-gnwjz\") pod \"dnsmasq-dns-57d769cc4f-dsl8v\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.344737 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dsl8v\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.344769 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-config\") pod \"dnsmasq-dns-57d769cc4f-dsl8v\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.345822 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-config\") pod \"dnsmasq-dns-57d769cc4f-dsl8v\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.349461 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dsl8v\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.393065 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnwjz\" 
(UniqueName: \"kubernetes.io/projected/5725a7cf-6124-4c35-910a-9c57203687fc-kube-api-access-gnwjz\") pod \"dnsmasq-dns-57d769cc4f-dsl8v\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.489614 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.687468 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-547fn"] Feb 14 10:59:10 crc kubenswrapper[4736]: W0214 10:59:10.779591 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7023bee_2454_47b8_b532_d02d6dbaf328.slice/crio-3af93a62f2ab9c518dd1d844816a9494c1f391db2222201f9d67041b44d9ded6 WatchSource:0}: Error finding container 3af93a62f2ab9c518dd1d844816a9494c1f391db2222201f9d67041b44d9ded6: Status 404 returned error can't find the container with id 3af93a62f2ab9c518dd1d844816a9494c1f391db2222201f9d67041b44d9ded6 Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.943599 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.945110 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.948404 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jb45p" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.948560 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.949791 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.950058 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.950158 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.950184 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.950256 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 14 10:59:10 crc kubenswrapper[4736]: I0214 10:59:10.958656 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.045101 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dsl8v"] Feb 14 10:59:11 crc kubenswrapper[4736]: W0214 10:59:11.061624 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5725a7cf_6124_4c35_910a_9c57203687fc.slice/crio-d147e988ebd96897b72db69c884adbb7090267a2d01a5af6b1790a04fdddb047 WatchSource:0}: Error finding container d147e988ebd96897b72db69c884adbb7090267a2d01a5af6b1790a04fdddb047: Status 404 returned error 
can't find the container with id d147e988ebd96897b72db69c884adbb7090267a2d01a5af6b1790a04fdddb047 Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.079785 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.079855 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.079873 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.079894 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.079912 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " 
pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.079929 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.079951 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.079964 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-config-data\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.079982 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8njjl\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-kube-api-access-8njjl\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.080000 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc 
kubenswrapper[4736]: I0214 10:59:11.080033 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181194 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181237 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181270 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181293 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181317 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181346 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181367 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-config-data\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181394 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8njjl\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-kube-api-access-8njjl\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181418 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181466 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181510 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181617 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.181696 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.182059 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.182965 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.183676 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.184477 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-config-data\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.186557 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.188253 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.191166 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.192615 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.197332 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8njjl\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-kube-api-access-8njjl\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.206239 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.267058 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.338589 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.340023 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.343466 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.343737 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.343881 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.344039 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.344211 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.344319 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.344577 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-c8fbh" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.351049 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486101 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486158 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486192 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486220 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486264 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486356 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdp6r\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-kube-api-access-bdp6r\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486408 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486429 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bb03a69-d572-4b83-97b9-13d33b501b6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486469 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.486580 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bb03a69-d572-4b83-97b9-13d33b501b6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.590862 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.590909 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bb03a69-d572-4b83-97b9-13d33b501b6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.590942 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.590973 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.591003 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.591028 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.591073 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.591124 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdp6r\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-kube-api-access-bdp6r\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.591149 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.591176 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.591203 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bb03a69-d572-4b83-97b9-13d33b501b6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.593577 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.593621 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.595027 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.595242 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.595919 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.604290 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.604952 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bb03a69-d572-4b83-97b9-13d33b501b6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.607207 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.639538 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.640854 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bb03a69-d572-4b83-97b9-13d33b501b6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.656949 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.658674 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdp6r\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-kube-api-access-bdp6r\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.681168 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.833117 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-547fn" event={"ID":"c7023bee-2454-47b8-b532-d02d6dbaf328","Type":"ContainerStarted","Data":"3af93a62f2ab9c518dd1d844816a9494c1f391db2222201f9d67041b44d9ded6"} Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.834882 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" event={"ID":"5725a7cf-6124-4c35-910a-9c57203687fc","Type":"ContainerStarted","Data":"d147e988ebd96897b72db69c884adbb7090267a2d01a5af6b1790a04fdddb047"} Feb 14 10:59:11 crc kubenswrapper[4736]: I0214 10:59:11.961061 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.261128 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 10:59:12 crc kubenswrapper[4736]: W0214 10:59:12.276563 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bb03a69_d572_4b83_97b9_13d33b501b6a.slice/crio-f643b23df5c20f3673c9140b918b909af49890b9fb4f92fcf7f2306957083983 
WatchSource:0}: Error finding container f643b23df5c20f3673c9140b918b909af49890b9fb4f92fcf7f2306957083983: Status 404 returned error can't find the container with id f643b23df5c20f3673c9140b918b909af49890b9fb4f92fcf7f2306957083983 Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.692020 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.693322 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.695108 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.696114 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-q4xmx" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.696340 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.696992 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.707017 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.725406 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.827728 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f077df65-4b06-4908-87bb-d08572879c62-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 
10:59:12.828055 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rq8g\" (UniqueName: \"kubernetes.io/projected/f077df65-4b06-4908-87bb-d08572879c62-kube-api-access-4rq8g\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.828112 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f077df65-4b06-4908-87bb-d08572879c62-config-data-default\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.828154 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f077df65-4b06-4908-87bb-d08572879c62-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.828173 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f077df65-4b06-4908-87bb-d08572879c62-kolla-config\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.828189 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f077df65-4b06-4908-87bb-d08572879c62-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.828217 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.828567 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f077df65-4b06-4908-87bb-d08572879c62-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.873896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bb03a69-d572-4b83-97b9-13d33b501b6a","Type":"ContainerStarted","Data":"f643b23df5c20f3673c9140b918b909af49890b9fb4f92fcf7f2306957083983"} Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.893382 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"34ab9b0c-bef8-4c48-9557-89ad8b9d864f","Type":"ContainerStarted","Data":"d60b58724c61f32a700d07a7bcb818986e8364965d0b0172d0db24041cfed337"} Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.931143 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f077df65-4b06-4908-87bb-d08572879c62-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.931202 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rq8g\" (UniqueName: \"kubernetes.io/projected/f077df65-4b06-4908-87bb-d08572879c62-kube-api-access-4rq8g\") pod \"openstack-galera-0\" (UID: 
\"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.931274 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f077df65-4b06-4908-87bb-d08572879c62-config-data-default\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.931344 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f077df65-4b06-4908-87bb-d08572879c62-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.931371 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f077df65-4b06-4908-87bb-d08572879c62-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.931390 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f077df65-4b06-4908-87bb-d08572879c62-kolla-config\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.931424 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.931454 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f077df65-4b06-4908-87bb-d08572879c62-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.937652 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f077df65-4b06-4908-87bb-d08572879c62-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.938518 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f077df65-4b06-4908-87bb-d08572879c62-kolla-config\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.938600 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f077df65-4b06-4908-87bb-d08572879c62-config-data-default\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.938870 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.940847 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f077df65-4b06-4908-87bb-d08572879c62-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.958424 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f077df65-4b06-4908-87bb-d08572879c62-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.962436 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f077df65-4b06-4908-87bb-d08572879c62-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.971763 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rq8g\" (UniqueName: \"kubernetes.io/projected/f077df65-4b06-4908-87bb-d08572879c62-kube-api-access-4rq8g\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:12 crc kubenswrapper[4736]: I0214 10:59:12.997166 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"f077df65-4b06-4908-87bb-d08572879c62\") " pod="openstack/openstack-galera-0" Feb 14 10:59:13 crc kubenswrapper[4736]: I0214 10:59:13.036704 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.024806 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.027656 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.076518 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-cqq8d" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.076986 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.077398 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.077988 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.084851 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.170034 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3a11355-8757-409d-b440-6b1a372ddd72-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.170094 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3a11355-8757-409d-b440-6b1a372ddd72-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: 
\"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.170175 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a11355-8757-409d-b440-6b1a372ddd72-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.170197 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxkk\" (UniqueName: \"kubernetes.io/projected/e3a11355-8757-409d-b440-6b1a372ddd72-kube-api-access-djxkk\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.170238 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3a11355-8757-409d-b440-6b1a372ddd72-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.170263 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.170325 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a11355-8757-409d-b440-6b1a372ddd72-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.170346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a11355-8757-409d-b440-6b1a372ddd72-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.272267 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3a11355-8757-409d-b440-6b1a372ddd72-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.272313 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3a11355-8757-409d-b440-6b1a372ddd72-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.272354 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a11355-8757-409d-b440-6b1a372ddd72-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.272374 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkk\" (UniqueName: \"kubernetes.io/projected/e3a11355-8757-409d-b440-6b1a372ddd72-kube-api-access-djxkk\") pod \"openstack-cell1-galera-0\" (UID: 
\"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.272394 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3a11355-8757-409d-b440-6b1a372ddd72-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.272418 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.272449 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a11355-8757-409d-b440-6b1a372ddd72-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.272469 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a11355-8757-409d-b440-6b1a372ddd72-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.275185 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3a11355-8757-409d-b440-6b1a372ddd72-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc 
kubenswrapper[4736]: I0214 10:59:14.277096 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3a11355-8757-409d-b440-6b1a372ddd72-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.280425 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.282120 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a11355-8757-409d-b440-6b1a372ddd72-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.282972 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3a11355-8757-409d-b440-6b1a372ddd72-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.286269 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a11355-8757-409d-b440-6b1a372ddd72-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.291547 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a11355-8757-409d-b440-6b1a372ddd72-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.300903 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxkk\" (UniqueName: \"kubernetes.io/projected/e3a11355-8757-409d-b440-6b1a372ddd72-kube-api-access-djxkk\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.310737 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e3a11355-8757-409d-b440-6b1a372ddd72\") " pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.399940 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.402492 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.409336 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.411613 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lblrx" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.411639 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.411886 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.442254 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.476978 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a412d55-7134-4b50-b303-3174348c85fa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.477164 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz65h\" (UniqueName: \"kubernetes.io/projected/0a412d55-7134-4b50-b303-3174348c85fa-kube-api-access-sz65h\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.477558 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a412d55-7134-4b50-b303-3174348c85fa-config-data\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.477866 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a412d55-7134-4b50-b303-3174348c85fa-kolla-config\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.477935 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a412d55-7134-4b50-b303-3174348c85fa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.579975 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a412d55-7134-4b50-b303-3174348c85fa-kolla-config\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.580034 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a412d55-7134-4b50-b303-3174348c85fa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.580080 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a412d55-7134-4b50-b303-3174348c85fa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.580128 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz65h\" (UniqueName: \"kubernetes.io/projected/0a412d55-7134-4b50-b303-3174348c85fa-kube-api-access-sz65h\") 
pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.580147 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a412d55-7134-4b50-b303-3174348c85fa-config-data\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.581142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a412d55-7134-4b50-b303-3174348c85fa-config-data\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.581727 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a412d55-7134-4b50-b303-3174348c85fa-kolla-config\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.589389 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a412d55-7134-4b50-b303-3174348c85fa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.605869 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz65h\" (UniqueName: \"kubernetes.io/projected/0a412d55-7134-4b50-b303-3174348c85fa-kube-api-access-sz65h\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.608048 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a412d55-7134-4b50-b303-3174348c85fa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0a412d55-7134-4b50-b303-3174348c85fa\") " pod="openstack/memcached-0" Feb 14 10:59:14 crc kubenswrapper[4736]: I0214 10:59:14.738468 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 14 10:59:16 crc kubenswrapper[4736]: I0214 10:59:16.660854 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 10:59:16 crc kubenswrapper[4736]: I0214 10:59:16.662337 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 10:59:16 crc kubenswrapper[4736]: I0214 10:59:16.666294 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h8js4" Feb 14 10:59:16 crc kubenswrapper[4736]: I0214 10:59:16.686464 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 10:59:16 crc kubenswrapper[4736]: I0214 10:59:16.721721 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5l8f\" (UniqueName: \"kubernetes.io/projected/6871c749-b4c2-4a16-8322-aa4384a1b86b-kube-api-access-x5l8f\") pod \"kube-state-metrics-0\" (UID: \"6871c749-b4c2-4a16-8322-aa4384a1b86b\") " pod="openstack/kube-state-metrics-0" Feb 14 10:59:16 crc kubenswrapper[4736]: I0214 10:59:16.823525 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5l8f\" (UniqueName: \"kubernetes.io/projected/6871c749-b4c2-4a16-8322-aa4384a1b86b-kube-api-access-x5l8f\") pod \"kube-state-metrics-0\" (UID: \"6871c749-b4c2-4a16-8322-aa4384a1b86b\") " pod="openstack/kube-state-metrics-0" Feb 14 10:59:16 crc kubenswrapper[4736]: I0214 10:59:16.858213 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x5l8f\" (UniqueName: \"kubernetes.io/projected/6871c749-b4c2-4a16-8322-aa4384a1b86b-kube-api-access-x5l8f\") pod \"kube-state-metrics-0\" (UID: \"6871c749-b4c2-4a16-8322-aa4384a1b86b\") " pod="openstack/kube-state-metrics-0" Feb 14 10:59:16 crc kubenswrapper[4736]: I0214 10:59:16.981584 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.756997 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-msd5j"] Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.758155 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-msd5j" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.766097 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.766176 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-gz5wc" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.766282 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.786523 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-msd5j"] Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.844616 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6dm75"] Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.845955 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.900269 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-scripts\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.900617 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-ovn-controller-tls-certs\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.900661 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-combined-ca-bundle\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.900683 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4hs2\" (UniqueName: \"kubernetes.io/projected/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-kube-api-access-t4hs2\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.900722 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-var-run-ovn\") pod \"ovn-controller-msd5j\" (UID: 
\"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.900847 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-var-log-ovn\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.900895 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-var-run\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:19 crc kubenswrapper[4736]: I0214 10:59:19.906652 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6dm75"] Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.001962 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl6k8\" (UniqueName: \"kubernetes.io/projected/4dc5a707-dee1-457c-9100-e80b9eb96f6c-kube-api-access-gl6k8\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002013 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-var-log-ovn\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002050 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-var-lib\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002138 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-var-run\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002192 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-scripts\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002232 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-ovn-controller-tls-certs\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002258 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-var-run\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002296 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-combined-ca-bundle\") pod \"ovn-controller-msd5j\" (UID: 
\"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002315 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4hs2\" (UniqueName: \"kubernetes.io/projected/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-kube-api-access-t4hs2\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002368 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-var-run-ovn\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002412 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-var-log\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002442 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4dc5a707-dee1-457c-9100-e80b9eb96f6c-scripts\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002481 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-etc-ovs\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 
10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002510 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-var-log-ovn\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.002766 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-var-run\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.004028 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-var-run-ovn\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.004418 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-scripts\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.020476 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-ovn-controller-tls-certs\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.022543 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-combined-ca-bundle\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.025326 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4hs2\" (UniqueName: \"kubernetes.io/projected/2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba-kube-api-access-t4hs2\") pod \"ovn-controller-msd5j\" (UID: \"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba\") " pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.084070 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-msd5j" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104101 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl6k8\" (UniqueName: \"kubernetes.io/projected/4dc5a707-dee1-457c-9100-e80b9eb96f6c-kube-api-access-gl6k8\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104167 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-var-lib\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104203 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-var-run\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104249 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-var-log\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104274 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4dc5a707-dee1-457c-9100-e80b9eb96f6c-scripts\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104296 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-etc-ovs\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104393 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-var-run\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104467 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-var-log\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104493 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-var-lib\") pod \"ovn-controller-ovs-6dm75\" (UID: 
\"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.104547 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4dc5a707-dee1-457c-9100-e80b9eb96f6c-etc-ovs\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.106196 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4dc5a707-dee1-457c-9100-e80b9eb96f6c-scripts\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.118698 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl6k8\" (UniqueName: \"kubernetes.io/projected/4dc5a707-dee1-457c-9100-e80b9eb96f6c-kube-api-access-gl6k8\") pod \"ovn-controller-ovs-6dm75\" (UID: \"4dc5a707-dee1-457c-9100-e80b9eb96f6c\") " pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.204111 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.752727 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.754601 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.763464 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.780351 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.780676 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.780873 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mltrs" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.781156 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.781191 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.920552 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0705ea43-d70f-400e-ac09-07dbebf128ea-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.920592 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0705ea43-d70f-400e-ac09-07dbebf128ea-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.920644 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0705ea43-d70f-400e-ac09-07dbebf128ea-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.920696 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0705ea43-d70f-400e-ac09-07dbebf128ea-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.920717 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0705ea43-d70f-400e-ac09-07dbebf128ea-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.920755 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntdh\" (UniqueName: \"kubernetes.io/projected/0705ea43-d70f-400e-ac09-07dbebf128ea-kube-api-access-bntdh\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.920780 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0705ea43-d70f-400e-ac09-07dbebf128ea-config\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:20 crc kubenswrapper[4736]: I0214 10:59:20.920994 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.023793 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bntdh\" (UniqueName: \"kubernetes.io/projected/0705ea43-d70f-400e-ac09-07dbebf128ea-kube-api-access-bntdh\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.023857 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0705ea43-d70f-400e-ac09-07dbebf128ea-config\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.023912 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.023947 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0705ea43-d70f-400e-ac09-07dbebf128ea-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.023968 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0705ea43-d70f-400e-ac09-07dbebf128ea-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 
10:59:21.024010 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0705ea43-d70f-400e-ac09-07dbebf128ea-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.024051 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0705ea43-d70f-400e-ac09-07dbebf128ea-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.024071 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0705ea43-d70f-400e-ac09-07dbebf128ea-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.024349 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.025083 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0705ea43-d70f-400e-ac09-07dbebf128ea-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.025106 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0705ea43-d70f-400e-ac09-07dbebf128ea-config\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.029227 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0705ea43-d70f-400e-ac09-07dbebf128ea-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.030064 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0705ea43-d70f-400e-ac09-07dbebf128ea-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.042583 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0705ea43-d70f-400e-ac09-07dbebf128ea-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.043197 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntdh\" (UniqueName: \"kubernetes.io/projected/0705ea43-d70f-400e-ac09-07dbebf128ea-kube-api-access-bntdh\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.044415 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0705ea43-d70f-400e-ac09-07dbebf128ea-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " 
pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.045921 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0705ea43-d70f-400e-ac09-07dbebf128ea\") " pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:21 crc kubenswrapper[4736]: I0214 10:59:21.097364 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.371985 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.373567 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.376844 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.377041 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.377154 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-46v6v" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.377269 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.395825 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.468408 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.468461 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.468487 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.468510 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.468614 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rts\" (UniqueName: \"kubernetes.io/projected/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-kube-api-access-f4rts\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.468730 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " 
pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.468842 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-config\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.468877 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.570872 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.570920 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.570938 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.570955 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.570972 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4rts\" (UniqueName: \"kubernetes.io/projected/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-kube-api-access-f4rts\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.570992 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.571013 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-config\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.571035 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.571566 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.571952 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.572204 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.572322 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-config\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.581840 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.585515 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.586174 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.595943 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4rts\" (UniqueName: \"kubernetes.io/projected/828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18-kube-api-access-f4rts\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.633033 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18\") " pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:23 crc kubenswrapper[4736]: I0214 10:59:23.693185 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.313406 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.314150 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7hzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-xfvpb_openstack(32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.315524 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" podUID="32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.396387 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.396552 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl7tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-4zptc_openstack(3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.397876 4736 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" podUID="3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.438490 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.438621 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnwjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-dsl8v_openstack(5725a7cf-6124-4c35-910a-9c57203687fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.439953 4736 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" podUID="5725a7cf-6124-4c35-910a-9c57203687fc" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.445694 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.445874 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5g6cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-547fn_openstack(c7023bee-2454-47b8-b532-d02d6dbaf328): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 10:59:32 crc kubenswrapper[4736]: E0214 10:59:32.447236 4736 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-547fn" podUID="c7023bee-2454-47b8-b532-d02d6dbaf328" Feb 14 10:59:32 crc kubenswrapper[4736]: I0214 10:59:32.818145 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 14 10:59:32 crc kubenswrapper[4736]: I0214 10:59:32.940675 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-msd5j"] Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.040657 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.055030 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.069380 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0a412d55-7134-4b50-b303-3174348c85fa","Type":"ContainerStarted","Data":"cbbe5712200b2de86e74baeb1ccaa849d07635b6d2688b2f831b9ae4b4381b49"} Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.070921 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f077df65-4b06-4908-87bb-d08572879c62","Type":"ContainerStarted","Data":"f1fb01ab99dfce75d5bc818799b30aead49c3c82c6573cfd4cadf356fce466b6"} Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.077636 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e3a11355-8757-409d-b440-6b1a372ddd72","Type":"ContainerStarted","Data":"1491cd67be0de9c3fd67b6e77d14009810209774fd4917fccd224d8129893897"} Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.078671 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-msd5j" 
event={"ID":"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba","Type":"ContainerStarted","Data":"e54047db4497e2150aacdd06ad2aea3c0502d275a77165d0aa6c7f25622ae020"} Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.091191 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 10:59:33 crc kubenswrapper[4736]: E0214 10:59:33.091434 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-547fn" podUID="c7023bee-2454-47b8-b532-d02d6dbaf328" Feb 14 10:59:33 crc kubenswrapper[4736]: E0214 10:59:33.091665 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" podUID="5725a7cf-6124-4c35-910a-9c57203687fc" Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.793488 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6dm75"] Feb 14 10:59:33 crc kubenswrapper[4736]: W0214 10:59:33.884400 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dc5a707_dee1_457c_9100_e80b9eb96f6c.slice/crio-1dc0988ca6b4d20af95564eeabd333a7bd9434534c8f608295a16fd7d6544658 WatchSource:0}: Error finding container 1dc0988ca6b4d20af95564eeabd333a7bd9434534c8f608295a16fd7d6544658: Status 404 returned error can't find the container with id 1dc0988ca6b4d20af95564eeabd333a7bd9434534c8f608295a16fd7d6544658 Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.966697 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:33 crc kubenswrapper[4736]: I0214 10:59:33.972832 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.086205 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7hzw\" (UniqueName: \"kubernetes.io/projected/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-kube-api-access-m7hzw\") pod \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\" (UID: \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\") " Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.086264 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-config\") pod \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\" (UID: \"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb\") " Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.086305 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-dns-svc\") pod \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.086392 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl7tm\" (UniqueName: \"kubernetes.io/projected/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-kube-api-access-zl7tm\") pod \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\" (UID: \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.086438 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-config\") pod \"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\" (UID: 
\"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9\") " Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.087161 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-config" (OuterVolumeSpecName: "config") pod "3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9" (UID: "3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.087479 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9" (UID: "3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.087558 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-config" (OuterVolumeSpecName: "config") pod "32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb" (UID: "32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.089985 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6dm75" event={"ID":"4dc5a707-dee1-457c-9100-e80b9eb96f6c","Type":"ContainerStarted","Data":"1dc0988ca6b4d20af95564eeabd333a7bd9434534c8f608295a16fd7d6544658"} Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.092175 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6871c749-b4c2-4a16-8322-aa4384a1b86b","Type":"ContainerStarted","Data":"eea932f5779db3d5ede22bccc19211d06babed83ad990737b1a70736716ab211"} Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.092887 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-kube-api-access-m7hzw" (OuterVolumeSpecName: "kube-api-access-m7hzw") pod "32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb" (UID: "32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb"). InnerVolumeSpecName "kube-api-access-m7hzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.095473 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bb03a69-d572-4b83-97b9-13d33b501b6a","Type":"ContainerStarted","Data":"027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b"} Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.098454 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" event={"ID":"3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9","Type":"ContainerDied","Data":"084cb8926eebc5b7ea2ba63f00e07f2437ad708cdec2825f7b26e61a83b32552"} Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.098536 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zptc" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.102912 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"34ab9b0c-bef8-4c48-9557-89ad8b9d864f","Type":"ContainerStarted","Data":"e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13"} Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.109280 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-kube-api-access-zl7tm" (OuterVolumeSpecName: "kube-api-access-zl7tm") pod "3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9" (UID: "3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9"). InnerVolumeSpecName "kube-api-access-zl7tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.139138 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" event={"ID":"32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb","Type":"ContainerDied","Data":"61b37e51e84513d012750d6b5d6b552b7021b051b8310c07e0081077b8ad4308"} Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.139232 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xfvpb" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.193881 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7hzw\" (UniqueName: \"kubernetes.io/projected/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-kube-api-access-m7hzw\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.193911 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.193920 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.193930 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl7tm\" (UniqueName: \"kubernetes.io/projected/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-kube-api-access-zl7tm\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.193939 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.208274 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xfvpb"] Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.213682 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xfvpb"] Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.421707 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb" path="/var/lib/kubelet/pods/32a9ad7a-2da6-4bcf-8bf8-c4de3ccdaabb/volumes" Feb 14 10:59:34 crc 
kubenswrapper[4736]: I0214 10:59:34.455416 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zptc"]
Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.461604 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zptc"]
Feb 14 10:59:34 crc kubenswrapper[4736]: I0214 10:59:34.474080 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 14 10:59:34 crc kubenswrapper[4736]: W0214 10:59:34.646678 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0705ea43_d70f_400e_ac09_07dbebf128ea.slice/crio-6931a2720265db54193a879733d2ac79f47c7ef71449fc906d4690bc58f94dd5 WatchSource:0}: Error finding container 6931a2720265db54193a879733d2ac79f47c7ef71449fc906d4690bc58f94dd5: Status 404 returned error can't find the container with id 6931a2720265db54193a879733d2ac79f47c7ef71449fc906d4690bc58f94dd5
Feb 14 10:59:35 crc kubenswrapper[4736]: I0214 10:59:35.163896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0705ea43-d70f-400e-ac09-07dbebf128ea","Type":"ContainerStarted","Data":"6931a2720265db54193a879733d2ac79f47c7ef71449fc906d4690bc58f94dd5"}
Feb 14 10:59:35 crc kubenswrapper[4736]: I0214 10:59:35.437611 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 14 10:59:36 crc kubenswrapper[4736]: W0214 10:59:36.132927 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod828f8add_3a9b_4ff0_82ef_ebb7c1b3dc18.slice/crio-7d51c02dbf7d5e6f86e6d3b3b1fb0c835939d56b66e9e0215761241eeed2f72a WatchSource:0}: Error finding container 7d51c02dbf7d5e6f86e6d3b3b1fb0c835939d56b66e9e0215761241eeed2f72a: Status 404 returned error can't find the container with id 7d51c02dbf7d5e6f86e6d3b3b1fb0c835939d56b66e9e0215761241eeed2f72a
Feb 14 10:59:36 crc kubenswrapper[4736]: I0214 10:59:36.170947 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18","Type":"ContainerStarted","Data":"7d51c02dbf7d5e6f86e6d3b3b1fb0c835939d56b66e9e0215761241eeed2f72a"}
Feb 14 10:59:36 crc kubenswrapper[4736]: I0214 10:59:36.406593 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9" path="/var/lib/kubelet/pods/3765fe0e-e6fd-49a8-9cf1-dd02a29a99c9/volumes"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.227895 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0a412d55-7134-4b50-b303-3174348c85fa","Type":"ContainerStarted","Data":"38618694994c90f9d6dd1718eabe7642406de24e8a13d113681b3903501cc798"}
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.228435 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.229586 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0705ea43-d70f-400e-ac09-07dbebf128ea","Type":"ContainerStarted","Data":"5026a2d11bb4704f18bff278d3929c4037083f5d4b53d6d77ceffd899a9f76db"}
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.231193 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6dm75" event={"ID":"4dc5a707-dee1-457c-9100-e80b9eb96f6c","Type":"ContainerStarted","Data":"4ab5c1e16fb80d0cef60e0dfc77904a954295313d7485889879c67e10d2af629"}
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.234148 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f077df65-4b06-4908-87bb-d08572879c62","Type":"ContainerStarted","Data":"70aba22c1b9dde2b4132c8f20b6dbedd7ae2edc7aae208627560bed189b9202d"}
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.235631 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6871c749-b4c2-4a16-8322-aa4384a1b86b","Type":"ContainerStarted","Data":"6c62a90297343352e1252a91e5d5e40032f0917d1f8225eb84c6dd635b60a3d4"}
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.235769 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.237324 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e3a11355-8757-409d-b440-6b1a372ddd72","Type":"ContainerStarted","Data":"5a381c6ce07c86df6679ae92614a49a2a3ef3e23a2b371f522490c6667eec256"}
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.238979 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-msd5j" event={"ID":"2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba","Type":"ContainerStarted","Data":"cc52a4912a699e5d14958016cf88c631310d2b9e8a9895bcefaf3231b54594f8"}
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.239069 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-msd5j"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.241120 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18","Type":"ContainerStarted","Data":"17d02b30c280947b8c2e10b60f6c8e2c656e420f2744ef83122c4d1b91e7b63a"}
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.263116 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.045915748 podStartE2EDuration="29.263100559s" podCreationTimestamp="2026-02-14 10:59:14 +0000 UTC" firstStartedPulling="2026-02-14 10:59:32.824811007 +0000 UTC m=+1083.193438375" lastFinishedPulling="2026-02-14 10:59:42.041995818 +0000 UTC m=+1092.410623186" observedRunningTime="2026-02-14 10:59:43.260663449 +0000 UTC m=+1093.629290827" watchObservedRunningTime="2026-02-14 10:59:43.263100559 +0000 UTC m=+1093.631727927"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.298582 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-cxr2b"]
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.299982 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.303402 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.304255 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-msd5j" podStartSLOduration=14.652019503 podStartE2EDuration="24.304235119s" podCreationTimestamp="2026-02-14 10:59:19 +0000 UTC" firstStartedPulling="2026-02-14 10:59:32.982936884 +0000 UTC m=+1083.351564252" lastFinishedPulling="2026-02-14 10:59:42.6351525 +0000 UTC m=+1093.003779868" observedRunningTime="2026-02-14 10:59:43.298405702 +0000 UTC m=+1093.667033070" watchObservedRunningTime="2026-02-14 10:59:43.304235119 +0000 UTC m=+1093.672862487"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.313777 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cxr2b"]
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.381618 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc96bf2-147a-454d-8443-20e850d25ad0-config\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.381682 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc96bf2-147a-454d-8443-20e850d25ad0-combined-ca-bundle\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.381735 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvc55\" (UniqueName: \"kubernetes.io/projected/1cc96bf2-147a-454d-8443-20e850d25ad0-kube-api-access-qvc55\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.381812 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1cc96bf2-147a-454d-8443-20e850d25ad0-ovs-rundir\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.381834 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cc96bf2-147a-454d-8443-20e850d25ad0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.381967 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1cc96bf2-147a-454d-8443-20e850d25ad0-ovn-rundir\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.430396 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.854512274 podStartE2EDuration="27.430375129s" podCreationTimestamp="2026-02-14 10:59:16 +0000 UTC" firstStartedPulling="2026-02-14 10:59:33.10930288 +0000 UTC m=+1083.477930248" lastFinishedPulling="2026-02-14 10:59:42.685165735 +0000 UTC m=+1093.053793103" observedRunningTime="2026-02-14 10:59:43.406139364 +0000 UTC m=+1093.774766732" watchObservedRunningTime="2026-02-14 10:59:43.430375129 +0000 UTC m=+1093.799002497"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.483034 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc96bf2-147a-454d-8443-20e850d25ad0-config\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.483090 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc96bf2-147a-454d-8443-20e850d25ad0-combined-ca-bundle\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.483127 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvc55\" (UniqueName: \"kubernetes.io/projected/1cc96bf2-147a-454d-8443-20e850d25ad0-kube-api-access-qvc55\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.483154 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1cc96bf2-147a-454d-8443-20e850d25ad0-ovs-rundir\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.483177 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cc96bf2-147a-454d-8443-20e850d25ad0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.483259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1cc96bf2-147a-454d-8443-20e850d25ad0-ovn-rundir\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.483705 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1cc96bf2-147a-454d-8443-20e850d25ad0-ovs-rundir\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.483960 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1cc96bf2-147a-454d-8443-20e850d25ad0-ovn-rundir\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.484917 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc96bf2-147a-454d-8443-20e850d25ad0-config\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.490756 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc96bf2-147a-454d-8443-20e850d25ad0-combined-ca-bundle\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.517448 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cc96bf2-147a-454d-8443-20e850d25ad0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.520459 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvc55\" (UniqueName: \"kubernetes.io/projected/1cc96bf2-147a-454d-8443-20e850d25ad0-kube-api-access-qvc55\") pod \"ovn-controller-metrics-cxr2b\" (UID: \"1cc96bf2-147a-454d-8443-20e850d25ad0\") " pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.540687 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-547fn"]
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.616393 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-mhcnx"]
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.625141 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.631522 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.633191 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-mhcnx"]
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.648376 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-cxr2b"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.686261 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-config\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.686347 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxnf\" (UniqueName: \"kubernetes.io/projected/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-kube-api-access-vdxnf\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.686376 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.686400 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.788179 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-config\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.788241 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxnf\" (UniqueName: \"kubernetes.io/projected/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-kube-api-access-vdxnf\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.788268 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.788293 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.792588 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.792599 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.793206 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-config\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.817379 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dsl8v"]
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.821796 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxnf\" (UniqueName: \"kubernetes.io/projected/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-kube-api-access-vdxnf\") pod \"dnsmasq-dns-7fd796d7df-mhcnx\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.895963 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gdqzj"]
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.903078 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.915677 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.946963 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gdqzj"]
Feb 14 10:59:43 crc kubenswrapper[4736]: I0214 10:59:43.947412 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.006397 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.006764 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.006943 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.006982 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-config\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.007008 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthj5\" (UniqueName: \"kubernetes.io/projected/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-kube-api-access-gthj5\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.108166 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.108234 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-config\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.108262 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthj5\" (UniqueName: \"kubernetes.io/projected/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-kube-api-access-gthj5\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.108339 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.108376 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.109394 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.109983 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.110506 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.110727 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-config\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.162618 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthj5\" (UniqueName: \"kubernetes.io/projected/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-kube-api-access-gthj5\") pod \"dnsmasq-dns-86db49b7ff-gdqzj\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.277333 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.277531 4736 generic.go:334] "Generic (PLEG): container finished" podID="4dc5a707-dee1-457c-9100-e80b9eb96f6c" containerID="4ab5c1e16fb80d0cef60e0dfc77904a954295313d7485889879c67e10d2af629" exitCode=0
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.277615 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6dm75" event={"ID":"4dc5a707-dee1-457c-9100-e80b9eb96f6c","Type":"ContainerDied","Data":"4ab5c1e16fb80d0cef60e0dfc77904a954295313d7485889879c67e10d2af629"}
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.290100 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-547fn" event={"ID":"c7023bee-2454-47b8-b532-d02d6dbaf328","Type":"ContainerDied","Data":"3af93a62f2ab9c518dd1d844816a9494c1f391db2222201f9d67041b44d9ded6"}
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.290152 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3af93a62f2ab9c518dd1d844816a9494c1f391db2222201f9d67041b44d9ded6"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.310100 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-547fn"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.337509 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g6cd\" (UniqueName: \"kubernetes.io/projected/c7023bee-2454-47b8-b532-d02d6dbaf328-kube-api-access-5g6cd\") pod \"c7023bee-2454-47b8-b532-d02d6dbaf328\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") "
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.337583 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-config\") pod \"c7023bee-2454-47b8-b532-d02d6dbaf328\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") "
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.337736 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-dns-svc\") pod \"c7023bee-2454-47b8-b532-d02d6dbaf328\" (UID: \"c7023bee-2454-47b8-b532-d02d6dbaf328\") "
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.349442 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7023bee-2454-47b8-b532-d02d6dbaf328-kube-api-access-5g6cd" (OuterVolumeSpecName: "kube-api-access-5g6cd") pod "c7023bee-2454-47b8-b532-d02d6dbaf328" (UID: "c7023bee-2454-47b8-b532-d02d6dbaf328"). InnerVolumeSpecName "kube-api-access-5g6cd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.352469 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-config" (OuterVolumeSpecName: "config") pod "c7023bee-2454-47b8-b532-d02d6dbaf328" (UID: "c7023bee-2454-47b8-b532-d02d6dbaf328"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.362099 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7023bee-2454-47b8-b532-d02d6dbaf328" (UID: "c7023bee-2454-47b8-b532-d02d6dbaf328"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.440355 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.440399 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g6cd\" (UniqueName: \"kubernetes.io/projected/c7023bee-2454-47b8-b532-d02d6dbaf328-kube-api-access-5g6cd\") on node \"crc\" DevicePath \"\""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.440411 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7023bee-2454-47b8-b532-d02d6dbaf328-config\") on node \"crc\" DevicePath \"\""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.668417 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v"
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.770593 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-config\") pod \"5725a7cf-6124-4c35-910a-9c57203687fc\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") "
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.770715 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnwjz\" (UniqueName: \"kubernetes.io/projected/5725a7cf-6124-4c35-910a-9c57203687fc-kube-api-access-gnwjz\") pod \"5725a7cf-6124-4c35-910a-9c57203687fc\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") "
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.770794 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-dns-svc\") pod \"5725a7cf-6124-4c35-910a-9c57203687fc\" (UID: \"5725a7cf-6124-4c35-910a-9c57203687fc\") "
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.772331 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-config" (OuterVolumeSpecName: "config") pod "5725a7cf-6124-4c35-910a-9c57203687fc" (UID: "5725a7cf-6124-4c35-910a-9c57203687fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.772853 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5725a7cf-6124-4c35-910a-9c57203687fc" (UID: "5725a7cf-6124-4c35-910a-9c57203687fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.786643 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cxr2b"]
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.786892 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5725a7cf-6124-4c35-910a-9c57203687fc-kube-api-access-gnwjz" (OuterVolumeSpecName: "kube-api-access-gnwjz") pod "5725a7cf-6124-4c35-910a-9c57203687fc" (UID: "5725a7cf-6124-4c35-910a-9c57203687fc"). InnerVolumeSpecName "kube-api-access-gnwjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.807118 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-mhcnx"]
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.873385 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-config\") on node \"crc\" DevicePath \"\""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.873423 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnwjz\" (UniqueName: \"kubernetes.io/projected/5725a7cf-6124-4c35-910a-9c57203687fc-kube-api-access-gnwjz\") on node \"crc\" DevicePath \"\""
Feb 14 10:59:44 crc kubenswrapper[4736]: I0214 10:59:44.873436 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5725a7cf-6124-4c35-910a-9c57203687fc-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.084834 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gdqzj"]
Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.297902 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v"
Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.297915 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dsl8v" event={"ID":"5725a7cf-6124-4c35-910a-9c57203687fc","Type":"ContainerDied","Data":"d147e988ebd96897b72db69c884adbb7090267a2d01a5af6b1790a04fdddb047"}
Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.299223 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cxr2b" event={"ID":"1cc96bf2-147a-454d-8443-20e850d25ad0","Type":"ContainerStarted","Data":"f01b78d0eae0182d59355ab2d4f8018b2779fb8004ebcb7ddc87ef29f90ca6e6"}
Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.301280 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" event={"ID":"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798","Type":"ContainerStarted","Data":"433f01fdf00518af20638ca06d6e1e4dfb6ecd9e45acb38b77e1a6b44ec0f0dd"}
Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.303477 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-547fn" Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.303484 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6dm75" event={"ID":"4dc5a707-dee1-457c-9100-e80b9eb96f6c","Type":"ContainerStarted","Data":"615b495c47244df5c091aae1a3b87587a1371b6507dfa88a6e162563ee2787ec"} Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.336228 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-547fn"] Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.340426 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-547fn"] Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.380129 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dsl8v"] Feb 14 10:59:45 crc kubenswrapper[4736]: I0214 10:59:45.397185 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dsl8v"] Feb 14 10:59:45 crc kubenswrapper[4736]: W0214 10:59:45.691597 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a4f9bb6_7bbf_4993_aa30_79f0f25d58a5.slice/crio-c41381d6bdaf31ae933449bae12ae698a0ceba60d12d10c737cfcbf8908ce388 WatchSource:0}: Error finding container c41381d6bdaf31ae933449bae12ae698a0ceba60d12d10c737cfcbf8908ce388: Status 404 returned error can't find the container with id c41381d6bdaf31ae933449bae12ae698a0ceba60d12d10c737cfcbf8908ce388 Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.315004 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6dm75" event={"ID":"4dc5a707-dee1-457c-9100-e80b9eb96f6c","Type":"ContainerStarted","Data":"34e89aded5035a3e3b03efe9ff053cbb72590e6840e157e5eac2cb3f9ef1f85b"} Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.315552 4736 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.315805 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6dm75" Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.317329 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" event={"ID":"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5","Type":"ContainerStarted","Data":"c41381d6bdaf31ae933449bae12ae698a0ceba60d12d10c737cfcbf8908ce388"} Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.318613 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cxr2b" event={"ID":"1cc96bf2-147a-454d-8443-20e850d25ad0","Type":"ContainerStarted","Data":"7ddbdc412ff8f931a31b5050239488b5429466737c842eedf71dc9295747d4c6"} Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.320421 4736 generic.go:334] "Generic (PLEG): container finished" podID="4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" containerID="745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9" exitCode=0 Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.320501 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" event={"ID":"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798","Type":"ContainerDied","Data":"745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9"} Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.324335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18","Type":"ContainerStarted","Data":"4e3a0557966956779cef521c222da42356bebdd7afbe2a3262d4561a68af9a58"} Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.326079 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"0705ea43-d70f-400e-ac09-07dbebf128ea","Type":"ContainerStarted","Data":"a98c6807eb9bc8481900fe1333a7bbec80271715dd772faad78aedfd2eb8661d"} Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.348640 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6dm75" podStartSLOduration=18.71905966 podStartE2EDuration="27.348621889s" podCreationTimestamp="2026-02-14 10:59:19 +0000 UTC" firstStartedPulling="2026-02-14 10:59:33.89999228 +0000 UTC m=+1084.268619648" lastFinishedPulling="2026-02-14 10:59:42.529554509 +0000 UTC m=+1092.898181877" observedRunningTime="2026-02-14 10:59:46.346665173 +0000 UTC m=+1096.715292541" watchObservedRunningTime="2026-02-14 10:59:46.348621889 +0000 UTC m=+1096.717249257" Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.399618 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-cxr2b" podStartSLOduration=2.369910695 podStartE2EDuration="3.399596832s" podCreationTimestamp="2026-02-14 10:59:43 +0000 UTC" firstStartedPulling="2026-02-14 10:59:44.811676736 +0000 UTC m=+1095.180304104" lastFinishedPulling="2026-02-14 10:59:45.841362873 +0000 UTC m=+1096.209990241" observedRunningTime="2026-02-14 10:59:46.386957539 +0000 UTC m=+1096.755584907" watchObservedRunningTime="2026-02-14 10:59:46.399596832 +0000 UTC m=+1096.768224200" Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.407857 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5725a7cf-6124-4c35-910a-9c57203687fc" path="/var/lib/kubelet/pods/5725a7cf-6124-4c35-910a-9c57203687fc/volumes" Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.408462 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7023bee-2454-47b8-b532-d02d6dbaf328" path="/var/lib/kubelet/pods/c7023bee-2454-47b8-b532-d02d6dbaf328/volumes" Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.436934 4736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.301482956 podStartE2EDuration="27.436910803s" podCreationTimestamp="2026-02-14 10:59:19 +0000 UTC" firstStartedPulling="2026-02-14 10:59:34.649488736 +0000 UTC m=+1085.018116104" lastFinishedPulling="2026-02-14 10:59:45.784916593 +0000 UTC m=+1096.153543951" observedRunningTime="2026-02-14 10:59:46.4124141 +0000 UTC m=+1096.781041488" watchObservedRunningTime="2026-02-14 10:59:46.436910803 +0000 UTC m=+1096.805538171" Feb 14 10:59:46 crc kubenswrapper[4736]: I0214 10:59:46.448893 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.81160516 podStartE2EDuration="24.448871906s" podCreationTimestamp="2026-02-14 10:59:22 +0000 UTC" firstStartedPulling="2026-02-14 10:59:36.135778827 +0000 UTC m=+1086.504406195" lastFinishedPulling="2026-02-14 10:59:45.773045573 +0000 UTC m=+1096.141672941" observedRunningTime="2026-02-14 10:59:46.441072362 +0000 UTC m=+1096.809699730" watchObservedRunningTime="2026-02-14 10:59:46.448871906 +0000 UTC m=+1096.817499284" Feb 14 10:59:47 crc kubenswrapper[4736]: I0214 10:59:47.334315 4736 generic.go:334] "Generic (PLEG): container finished" podID="0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" containerID="64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17" exitCode=0 Feb 14 10:59:47 crc kubenswrapper[4736]: I0214 10:59:47.334378 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" event={"ID":"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5","Type":"ContainerDied","Data":"64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17"} Feb 14 10:59:47 crc kubenswrapper[4736]: I0214 10:59:47.336563 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" 
event={"ID":"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798","Type":"ContainerStarted","Data":"e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231"} Feb 14 10:59:47 crc kubenswrapper[4736]: I0214 10:59:47.407473 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" podStartSLOduration=3.377833978 podStartE2EDuration="4.407456123s" podCreationTimestamp="2026-02-14 10:59:43 +0000 UTC" firstStartedPulling="2026-02-14 10:59:44.813173979 +0000 UTC m=+1095.181801347" lastFinishedPulling="2026-02-14 10:59:45.842796124 +0000 UTC m=+1096.211423492" observedRunningTime="2026-02-14 10:59:47.401612225 +0000 UTC m=+1097.770239593" watchObservedRunningTime="2026-02-14 10:59:47.407456123 +0000 UTC m=+1097.776083491" Feb 14 10:59:47 crc kubenswrapper[4736]: I0214 10:59:47.693899 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:47 crc kubenswrapper[4736]: I0214 10:59:47.736093 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.098667 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.146249 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.348898 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" event={"ID":"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5","Type":"ContainerStarted","Data":"20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54"} Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.349038 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" Feb 14 10:59:48 crc kubenswrapper[4736]: 
I0214 10:59:48.351249 4736 generic.go:334] "Generic (PLEG): container finished" podID="e3a11355-8757-409d-b440-6b1a372ddd72" containerID="5a381c6ce07c86df6679ae92614a49a2a3ef3e23a2b371f522490c6667eec256" exitCode=0 Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.351298 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e3a11355-8757-409d-b440-6b1a372ddd72","Type":"ContainerDied","Data":"5a381c6ce07c86df6679ae92614a49a2a3ef3e23a2b371f522490c6667eec256"} Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.353428 4736 generic.go:334] "Generic (PLEG): container finished" podID="f077df65-4b06-4908-87bb-d08572879c62" containerID="70aba22c1b9dde2b4132c8f20b6dbedd7ae2edc7aae208627560bed189b9202d" exitCode=0 Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.353585 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f077df65-4b06-4908-87bb-d08572879c62","Type":"ContainerDied","Data":"70aba22c1b9dde2b4132c8f20b6dbedd7ae2edc7aae208627560bed189b9202d"} Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.354287 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.354318 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.354333 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.428521 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" podStartSLOduration=5.017299903 podStartE2EDuration="5.428499962s" podCreationTimestamp="2026-02-14 10:59:43 +0000 UTC" firstStartedPulling="2026-02-14 10:59:45.750017592 +0000 UTC m=+1096.118644960" 
lastFinishedPulling="2026-02-14 10:59:46.161217641 +0000 UTC m=+1096.529845019" observedRunningTime="2026-02-14 10:59:48.404912966 +0000 UTC m=+1098.773540334" watchObservedRunningTime="2026-02-14 10:59:48.428499962 +0000 UTC m=+1098.797127340" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.444624 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.454661 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.759524 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.760826 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.768117 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.768376 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.768483 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-t6d6d" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.774509 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.793312 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.859445 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d28847a2-993a-4124-b138-ecec67828807-ovn-rundir\") pod 
\"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.859502 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d28847a2-993a-4124-b138-ecec67828807-scripts\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.859527 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d28847a2-993a-4124-b138-ecec67828807-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.859561 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d28847a2-993a-4124-b138-ecec67828807-config\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.859592 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7hlw\" (UniqueName: \"kubernetes.io/projected/d28847a2-993a-4124-b138-ecec67828807-kube-api-access-t7hlw\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.859619 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d28847a2-993a-4124-b138-ecec67828807-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 
crc kubenswrapper[4736]: I0214 10:59:48.859635 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28847a2-993a-4124-b138-ecec67828807-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.961422 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d28847a2-993a-4124-b138-ecec67828807-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.961474 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d28847a2-993a-4124-b138-ecec67828807-scripts\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.961495 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d28847a2-993a-4124-b138-ecec67828807-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.961528 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d28847a2-993a-4124-b138-ecec67828807-config\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.961557 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7hlw\" (UniqueName: 
\"kubernetes.io/projected/d28847a2-993a-4124-b138-ecec67828807-kube-api-access-t7hlw\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.961590 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d28847a2-993a-4124-b138-ecec67828807-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.961608 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28847a2-993a-4124-b138-ecec67828807-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.963116 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d28847a2-993a-4124-b138-ecec67828807-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.963266 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d28847a2-993a-4124-b138-ecec67828807-scripts\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.963275 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d28847a2-993a-4124-b138-ecec67828807-config\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.965150 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28847a2-993a-4124-b138-ecec67828807-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.966039 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d28847a2-993a-4124-b138-ecec67828807-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.969318 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d28847a2-993a-4124-b138-ecec67828807-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:48 crc kubenswrapper[4736]: I0214 10:59:48.979832 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7hlw\" (UniqueName: \"kubernetes.io/projected/d28847a2-993a-4124-b138-ecec67828807-kube-api-access-t7hlw\") pod \"ovn-northd-0\" (UID: \"d28847a2-993a-4124-b138-ecec67828807\") " pod="openstack/ovn-northd-0" Feb 14 10:59:49 crc kubenswrapper[4736]: I0214 10:59:49.103855 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 14 10:59:49 crc kubenswrapper[4736]: I0214 10:59:49.361108 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f077df65-4b06-4908-87bb-d08572879c62","Type":"ContainerStarted","Data":"4f71a85f32f7caa81b5d4ed39b4dd5e825dd84b79c80f1de7879308c9281a7d2"} Feb 14 10:59:49 crc kubenswrapper[4736]: I0214 10:59:49.365620 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e3a11355-8757-409d-b440-6b1a372ddd72","Type":"ContainerStarted","Data":"5917ba723508340de0aa8724fabcb94497ca31727fb6daa3e1e7c0e2d601d804"} Feb 14 10:59:49 crc kubenswrapper[4736]: I0214 10:59:49.382502 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.908505226 podStartE2EDuration="38.382486007s" podCreationTimestamp="2026-02-14 10:59:11 +0000 UTC" firstStartedPulling="2026-02-14 10:59:33.05631537 +0000 UTC m=+1083.424942738" lastFinishedPulling="2026-02-14 10:59:42.530296151 +0000 UTC m=+1092.898923519" observedRunningTime="2026-02-14 10:59:49.380534491 +0000 UTC m=+1099.749161859" watchObservedRunningTime="2026-02-14 10:59:49.382486007 +0000 UTC m=+1099.751113375" Feb 14 10:59:49 crc kubenswrapper[4736]: I0214 10:59:49.406927 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.823409994 podStartE2EDuration="37.406907308s" podCreationTimestamp="2026-02-14 10:59:12 +0000 UTC" firstStartedPulling="2026-02-14 10:59:33.056372861 +0000 UTC m=+1083.425000229" lastFinishedPulling="2026-02-14 10:59:42.639870175 +0000 UTC m=+1093.008497543" observedRunningTime="2026-02-14 10:59:49.399815124 +0000 UTC m=+1099.768442512" watchObservedRunningTime="2026-02-14 10:59:49.406907308 +0000 UTC m=+1099.775534686" Feb 14 10:59:49 crc kubenswrapper[4736]: I0214 10:59:49.563510 4736 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 14 10:59:49 crc kubenswrapper[4736]: W0214 10:59:49.571351 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd28847a2_993a_4124_b138_ecec67828807.slice/crio-a3987a2d46b8ec680ef7f88dfe871863a77f6310eb8bbd41e67f15d4a1730baf WatchSource:0}: Error finding container a3987a2d46b8ec680ef7f88dfe871863a77f6310eb8bbd41e67f15d4a1730baf: Status 404 returned error can't find the container with id a3987a2d46b8ec680ef7f88dfe871863a77f6310eb8bbd41e67f15d4a1730baf Feb 14 10:59:49 crc kubenswrapper[4736]: I0214 10:59:49.740917 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 14 10:59:50 crc kubenswrapper[4736]: I0214 10:59:50.371347 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d28847a2-993a-4124-b138-ecec67828807","Type":"ContainerStarted","Data":"a3987a2d46b8ec680ef7f88dfe871863a77f6310eb8bbd41e67f15d4a1730baf"} Feb 14 10:59:51 crc kubenswrapper[4736]: I0214 10:59:51.392036 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d28847a2-993a-4124-b138-ecec67828807","Type":"ContainerStarted","Data":"a27a164ab95a47394fbda07010908f21c48f23e6bb668c5d0f6606d0ebc439eb"} Feb 14 10:59:51 crc kubenswrapper[4736]: I0214 10:59:51.392309 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d28847a2-993a-4124-b138-ecec67828807","Type":"ContainerStarted","Data":"cbe2175c7c83b1a2799e14c1deb9f66cd7e7df905e3eb60cd3c28bf8bff61c18"} Feb 14 10:59:51 crc kubenswrapper[4736]: I0214 10:59:51.392330 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 14 10:59:51 crc kubenswrapper[4736]: I0214 10:59:51.410774 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" 
podStartSLOduration=2.407250163 podStartE2EDuration="3.410736629s" podCreationTimestamp="2026-02-14 10:59:48 +0000 UTC" firstStartedPulling="2026-02-14 10:59:49.573158178 +0000 UTC m=+1099.941785546" lastFinishedPulling="2026-02-14 10:59:50.576644644 +0000 UTC m=+1100.945272012" observedRunningTime="2026-02-14 10:59:51.409918256 +0000 UTC m=+1101.778545644" watchObservedRunningTime="2026-02-14 10:59:51.410736629 +0000 UTC m=+1101.779364007" Feb 14 10:59:53 crc kubenswrapper[4736]: I0214 10:59:53.037692 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 14 10:59:53 crc kubenswrapper[4736]: I0214 10:59:53.038396 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 14 10:59:53 crc kubenswrapper[4736]: I0214 10:59:53.949944 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" Feb 14 10:59:54 crc kubenswrapper[4736]: I0214 10:59:54.113469 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 14 10:59:54 crc kubenswrapper[4736]: I0214 10:59:54.194154 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 14 10:59:54 crc kubenswrapper[4736]: I0214 10:59:54.278886 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" Feb 14 10:59:54 crc kubenswrapper[4736]: I0214 10:59:54.329323 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-mhcnx"] Feb 14 10:59:54 crc kubenswrapper[4736]: I0214 10:59:54.410518 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:54 crc kubenswrapper[4736]: I0214 10:59:54.410556 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:54 crc kubenswrapper[4736]: I0214 10:59:54.412656 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" podUID="4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" containerName="dnsmasq-dns" containerID="cri-o://e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231" gracePeriod=10 Feb 14 10:59:54 crc kubenswrapper[4736]: I0214 10:59:54.502353 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.404900 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.423955 4736 generic.go:334] "Generic (PLEG): container finished" podID="4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" containerID="e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231" exitCode=0 Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.423990 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.424036 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" event={"ID":"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798","Type":"ContainerDied","Data":"e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231"} Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.424095 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-mhcnx" event={"ID":"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798","Type":"ContainerDied","Data":"433f01fdf00518af20638ca06d6e1e4dfb6ecd9e45acb38b77e1a6b44ec0f0dd"} Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.424120 4736 scope.go:117] "RemoveContainer" containerID="e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.465046 4736 scope.go:117] "RemoveContainer" containerID="745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.490691 4736 scope.go:117] "RemoveContainer" containerID="e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231" Feb 14 10:59:55 crc kubenswrapper[4736]: E0214 10:59:55.491148 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231\": container with ID starting with e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231 not found: ID does not exist" containerID="e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.491176 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231"} err="failed to get container status 
\"e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231\": rpc error: code = NotFound desc = could not find container \"e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231\": container with ID starting with e8d9d1d53441294089c7f7f816500d4fdeaddc1de167c27d7bd215da1a686231 not found: ID does not exist" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.491198 4736 scope.go:117] "RemoveContainer" containerID="745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9" Feb 14 10:59:55 crc kubenswrapper[4736]: E0214 10:59:55.491364 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9\": container with ID starting with 745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9 not found: ID does not exist" containerID="745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.491380 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9"} err="failed to get container status \"745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9\": rpc error: code = NotFound desc = could not find container \"745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9\": container with ID starting with 745f75634fe0b4bf329d92f90dd5912b3233bbab53baf9997aaf15830443b6a9 not found: ID does not exist" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.532633 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.567405 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-ovsdbserver-nb\") 
pod \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.567492 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-dns-svc\") pod \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.567824 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-config\") pod \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.567861 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdxnf\" (UniqueName: \"kubernetes.io/projected/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-kube-api-access-vdxnf\") pod \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\" (UID: \"4d49e5a2-5d93-4ddd-a58d-a5e7adce0798\") " Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.583823 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-kube-api-access-vdxnf" (OuterVolumeSpecName: "kube-api-access-vdxnf") pod "4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" (UID: "4d49e5a2-5d93-4ddd-a58d-a5e7adce0798"). InnerVolumeSpecName "kube-api-access-vdxnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.627195 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" (UID: "4d49e5a2-5d93-4ddd-a58d-a5e7adce0798"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.635313 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" (UID: "4d49e5a2-5d93-4ddd-a58d-a5e7adce0798"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.658779 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-config" (OuterVolumeSpecName: "config") pod "4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" (UID: "4d49e5a2-5d93-4ddd-a58d-a5e7adce0798"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.670023 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-config\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.670052 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdxnf\" (UniqueName: \"kubernetes.io/projected/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-kube-api-access-vdxnf\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.670078 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.670087 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 
10:59:55.756872 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-mhcnx"] Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.761929 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-mhcnx"] Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.980592 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1952-account-create-update-sg565"] Feb 14 10:59:55 crc kubenswrapper[4736]: E0214 10:59:55.981320 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" containerName="init" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.981340 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" containerName="init" Feb 14 10:59:55 crc kubenswrapper[4736]: E0214 10:59:55.981368 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" containerName="dnsmasq-dns" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.981377 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" containerName="dnsmasq-dns" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.981567 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" containerName="dnsmasq-dns" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.982147 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1952-account-create-update-sg565" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.984494 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 14 10:59:55 crc kubenswrapper[4736]: I0214 10:59:55.999257 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-r79mn"] Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.000547 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r79mn" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.005997 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1952-account-create-update-sg565"] Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.015911 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r79mn"] Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.076274 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe7622f-339d-408e-a8bc-b83f3fd55653-operator-scripts\") pod \"placement-1952-account-create-update-sg565\" (UID: \"efe7622f-339d-408e-a8bc-b83f3fd55653\") " pod="openstack/placement-1952-account-create-update-sg565" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.076320 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-operator-scripts\") pod \"keystone-db-create-r79mn\" (UID: \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\") " pod="openstack/keystone-db-create-r79mn" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.076372 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9dp\" (UniqueName: 
\"kubernetes.io/projected/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-kube-api-access-4s9dp\") pod \"keystone-db-create-r79mn\" (UID: \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\") " pod="openstack/keystone-db-create-r79mn" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.076632 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5x7\" (UniqueName: \"kubernetes.io/projected/efe7622f-339d-408e-a8bc-b83f3fd55653-kube-api-access-cx5x7\") pod \"placement-1952-account-create-update-sg565\" (UID: \"efe7622f-339d-408e-a8bc-b83f3fd55653\") " pod="openstack/placement-1952-account-create-update-sg565" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.081563 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-bpqsj"] Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.082404 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bpqsj" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.091842 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bpqsj"] Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.178142 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe7622f-339d-408e-a8bc-b83f3fd55653-operator-scripts\") pod \"placement-1952-account-create-update-sg565\" (UID: \"efe7622f-339d-408e-a8bc-b83f3fd55653\") " pod="openstack/placement-1952-account-create-update-sg565" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.178227 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-operator-scripts\") pod \"keystone-db-create-r79mn\" (UID: \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\") " pod="openstack/keystone-db-create-r79mn" Feb 14 10:59:56 crc 
kubenswrapper[4736]: I0214 10:59:56.178264 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s9dp\" (UniqueName: \"kubernetes.io/projected/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-kube-api-access-4s9dp\") pod \"keystone-db-create-r79mn\" (UID: \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\") " pod="openstack/keystone-db-create-r79mn" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.178354 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-operator-scripts\") pod \"placement-db-create-bpqsj\" (UID: \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\") " pod="openstack/placement-db-create-bpqsj" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.178414 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5x7\" (UniqueName: \"kubernetes.io/projected/efe7622f-339d-408e-a8bc-b83f3fd55653-kube-api-access-cx5x7\") pod \"placement-1952-account-create-update-sg565\" (UID: \"efe7622f-339d-408e-a8bc-b83f3fd55653\") " pod="openstack/placement-1952-account-create-update-sg565" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.178459 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmwf\" (UniqueName: \"kubernetes.io/projected/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-kube-api-access-pdmwf\") pod \"placement-db-create-bpqsj\" (UID: \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\") " pod="openstack/placement-db-create-bpqsj" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.179517 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-operator-scripts\") pod \"keystone-db-create-r79mn\" (UID: \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\") " 
pod="openstack/keystone-db-create-r79mn" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.180410 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe7622f-339d-408e-a8bc-b83f3fd55653-operator-scripts\") pod \"placement-1952-account-create-update-sg565\" (UID: \"efe7622f-339d-408e-a8bc-b83f3fd55653\") " pod="openstack/placement-1952-account-create-update-sg565" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.196809 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-877f-account-create-update-pwlkx"] Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.197907 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.202690 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.207071 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s9dp\" (UniqueName: \"kubernetes.io/projected/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-kube-api-access-4s9dp\") pod \"keystone-db-create-r79mn\" (UID: \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\") " pod="openstack/keystone-db-create-r79mn" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.209509 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-877f-account-create-update-pwlkx"] Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.225717 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5x7\" (UniqueName: \"kubernetes.io/projected/efe7622f-339d-408e-a8bc-b83f3fd55653-kube-api-access-cx5x7\") pod \"placement-1952-account-create-update-sg565\" (UID: \"efe7622f-339d-408e-a8bc-b83f3fd55653\") " pod="openstack/placement-1952-account-create-update-sg565" Feb 14 10:59:56 crc 
kubenswrapper[4736]: I0214 10:59:56.279844 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmwf\" (UniqueName: \"kubernetes.io/projected/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-kube-api-access-pdmwf\") pod \"placement-db-create-bpqsj\" (UID: \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\") " pod="openstack/placement-db-create-bpqsj" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.279977 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75tn7\" (UniqueName: \"kubernetes.io/projected/d6866fe7-50cc-40b2-8326-ac36ca31eb25-kube-api-access-75tn7\") pod \"keystone-877f-account-create-update-pwlkx\" (UID: \"d6866fe7-50cc-40b2-8326-ac36ca31eb25\") " pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.280068 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6866fe7-50cc-40b2-8326-ac36ca31eb25-operator-scripts\") pod \"keystone-877f-account-create-update-pwlkx\" (UID: \"d6866fe7-50cc-40b2-8326-ac36ca31eb25\") " pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.280205 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-operator-scripts\") pod \"placement-db-create-bpqsj\" (UID: \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\") " pod="openstack/placement-db-create-bpqsj" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.281289 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-operator-scripts\") pod \"placement-db-create-bpqsj\" (UID: \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\") " 
pod="openstack/placement-db-create-bpqsj" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.296305 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1952-account-create-update-sg565" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.299528 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmwf\" (UniqueName: \"kubernetes.io/projected/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-kube-api-access-pdmwf\") pod \"placement-db-create-bpqsj\" (UID: \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\") " pod="openstack/placement-db-create-bpqsj" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.314020 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r79mn" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.382076 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6866fe7-50cc-40b2-8326-ac36ca31eb25-operator-scripts\") pod \"keystone-877f-account-create-update-pwlkx\" (UID: \"d6866fe7-50cc-40b2-8326-ac36ca31eb25\") " pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.382237 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75tn7\" (UniqueName: \"kubernetes.io/projected/d6866fe7-50cc-40b2-8326-ac36ca31eb25-kube-api-access-75tn7\") pod \"keystone-877f-account-create-update-pwlkx\" (UID: \"d6866fe7-50cc-40b2-8326-ac36ca31eb25\") " pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.385058 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6866fe7-50cc-40b2-8326-ac36ca31eb25-operator-scripts\") pod \"keystone-877f-account-create-update-pwlkx\" (UID: 
\"d6866fe7-50cc-40b2-8326-ac36ca31eb25\") " pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.396410 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bpqsj" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.398937 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75tn7\" (UniqueName: \"kubernetes.io/projected/d6866fe7-50cc-40b2-8326-ac36ca31eb25-kube-api-access-75tn7\") pod \"keystone-877f-account-create-update-pwlkx\" (UID: \"d6866fe7-50cc-40b2-8326-ac36ca31eb25\") " pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.410440 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d49e5a2-5d93-4ddd-a58d-a5e7adce0798" path="/var/lib/kubelet/pods/4d49e5a2-5d93-4ddd-a58d-a5e7adce0798/volumes" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.553410 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.842628 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r79mn"] Feb 14 10:59:56 crc kubenswrapper[4736]: I0214 10:59:56.878068 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1952-account-create-update-sg565"] Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.042078 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.059200 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bpqsj"] Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.086834 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-gtvzw"] Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.089091 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.211773 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gtvzw"] Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.242025 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-877f-account-create-update-pwlkx"] Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.242892 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.242956 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-config\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.243042 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.243106 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tcsw\" (UniqueName: \"kubernetes.io/projected/f9d4ed58-4f61-4c47-acf8-09837e068c27-kube-api-access-4tcsw\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " 
pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.243167 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-dns-svc\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.345240 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.345317 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tcsw\" (UniqueName: \"kubernetes.io/projected/f9d4ed58-4f61-4c47-acf8-09837e068c27-kube-api-access-4tcsw\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.345382 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-dns-svc\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.345439 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 
14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.345470 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-config\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.348107 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-config\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.349213 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.350179 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.350202 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-dns-svc\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.383648 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4tcsw\" (UniqueName: \"kubernetes.io/projected/f9d4ed58-4f61-4c47-acf8-09837e068c27-kube-api-access-4tcsw\") pod \"dnsmasq-dns-698758b865-gtvzw\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.444027 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.453362 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-877f-account-create-update-pwlkx" event={"ID":"d6866fe7-50cc-40b2-8326-ac36ca31eb25","Type":"ContainerStarted","Data":"2766bee760ae88a743c61209c0811e27fce3d1f54cf06f3467a4a67fb317f434"} Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.454965 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1952-account-create-update-sg565" event={"ID":"efe7622f-339d-408e-a8bc-b83f3fd55653","Type":"ContainerStarted","Data":"5f837766c253c9f6880a425773eb11a84f8e8778ee562d7e0f8199c1049beca5"} Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.455003 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1952-account-create-update-sg565" event={"ID":"efe7622f-339d-408e-a8bc-b83f3fd55653","Type":"ContainerStarted","Data":"62a0eeae8cf9530b57b434493a5a07682e6cd1f8a052892e83ec15f9ba6d41a8"} Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.461900 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r79mn" event={"ID":"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e","Type":"ContainerStarted","Data":"5e0222a9ba3cc626c5c25924ade6994b2834367ff8e4e98e96c3f7e4492bbd72"} Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.461983 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r79mn" 
event={"ID":"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e","Type":"ContainerStarted","Data":"f321f498d2f080dc9c5954fea7f9d83737ee5c9b432fc23ede80f97378ebb0de"} Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.467316 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bpqsj" event={"ID":"bdae2a62-d876-4c93-b5a3-ae8bcb002f08","Type":"ContainerStarted","Data":"dd3bbdf7f677003cfe5547454b84e96f3fe4a9594819b64779b309606c88f16b"} Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.517459 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-r79mn" podStartSLOduration=2.5174343710000002 podStartE2EDuration="2.517434371s" podCreationTimestamp="2026-02-14 10:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:59:57.512215061 +0000 UTC m=+1107.880842439" watchObservedRunningTime="2026-02-14 10:59:57.517434371 +0000 UTC m=+1107.886061749" Feb 14 10:59:57 crc kubenswrapper[4736]: I0214 10:59:57.518064 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-1952-account-create-update-sg565" podStartSLOduration=2.518056689 podStartE2EDuration="2.518056689s" podCreationTimestamp="2026-02-14 10:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:59:57.48505957 +0000 UTC m=+1107.853686938" watchObservedRunningTime="2026-02-14 10:59:57.518056689 +0000 UTC m=+1107.886684057" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.063535 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gtvzw"] Feb 14 10:59:58 crc kubenswrapper[4736]: W0214 10:59:58.075132 4736 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9d4ed58_4f61_4c47_acf8_09837e068c27.slice/crio-aa93ebd80d832df1c42fb9a9908a34b4c78ff4d44d899d7c84531bef30a0878f WatchSource:0}: Error finding container aa93ebd80d832df1c42fb9a9908a34b4c78ff4d44d899d7c84531bef30a0878f: Status 404 returned error can't find the container with id aa93ebd80d832df1c42fb9a9908a34b4c78ff4d44d899d7c84531bef30a0878f Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.263087 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.272814 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.274588 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.274589 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.275561 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.276823 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xnkxk" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.283875 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.372571 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq86s\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-kube-api-access-jq86s\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 
10:59:58.373249 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0283d5c8-4795-458e-8faf-c4908c75e01e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.373292 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0283d5c8-4795-458e-8faf-c4908c75e01e-cache\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.373631 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.373658 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0283d5c8-4795-458e-8faf-c4908c75e01e-lock\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.373680 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.474935 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.474980 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0283d5c8-4795-458e-8faf-c4908c75e01e-lock\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.475011 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.475072 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq86s\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-kube-api-access-jq86s\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.475107 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0283d5c8-4795-458e-8faf-c4908c75e01e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.475137 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0283d5c8-4795-458e-8faf-c4908c75e01e-cache\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 
10:59:58.475286 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: E0214 10:59:58.475288 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 10:59:58 crc kubenswrapper[4736]: E0214 10:59:58.475318 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 10:59:58 crc kubenswrapper[4736]: E0214 10:59:58.475370 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift podName:0283d5c8-4795-458e-8faf-c4908c75e01e nodeName:}" failed. No retries permitted until 2026-02-14 10:59:58.975352891 +0000 UTC m=+1109.343980259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift") pod "swift-storage-0" (UID: "0283d5c8-4795-458e-8faf-c4908c75e01e") : configmap "swift-ring-files" not found Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.475626 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0283d5c8-4795-458e-8faf-c4908c75e01e-cache\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.475732 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0283d5c8-4795-458e-8faf-c4908c75e01e-lock\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.479115 4736 generic.go:334] "Generic (PLEG): container finished" podID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerID="2c033dacd717e88056c5d5f7323364d7d76b6bf9f9812b6121565976e2d91b31" exitCode=0 Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.479196 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gtvzw" event={"ID":"f9d4ed58-4f61-4c47-acf8-09837e068c27","Type":"ContainerDied","Data":"2c033dacd717e88056c5d5f7323364d7d76b6bf9f9812b6121565976e2d91b31"} Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.479235 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gtvzw" event={"ID":"f9d4ed58-4f61-4c47-acf8-09837e068c27","Type":"ContainerStarted","Data":"aa93ebd80d832df1c42fb9a9908a34b4c78ff4d44d899d7c84531bef30a0878f"} Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.484312 4736 generic.go:334] "Generic (PLEG): container finished" podID="d6866fe7-50cc-40b2-8326-ac36ca31eb25" 
containerID="ccfcde2abb7be534a534ee9e8717234afe40132469e27ec578267eeb3b7c8af9" exitCode=0 Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.484375 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-877f-account-create-update-pwlkx" event={"ID":"d6866fe7-50cc-40b2-8326-ac36ca31eb25","Type":"ContainerDied","Data":"ccfcde2abb7be534a534ee9e8717234afe40132469e27ec578267eeb3b7c8af9"} Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.485547 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0283d5c8-4795-458e-8faf-c4908c75e01e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.494682 4736 generic.go:334] "Generic (PLEG): container finished" podID="efe7622f-339d-408e-a8bc-b83f3fd55653" containerID="5f837766c253c9f6880a425773eb11a84f8e8778ee562d7e0f8199c1049beca5" exitCode=0 Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.494883 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1952-account-create-update-sg565" event={"ID":"efe7622f-339d-408e-a8bc-b83f3fd55653","Type":"ContainerDied","Data":"5f837766c253c9f6880a425773eb11a84f8e8778ee562d7e0f8199c1049beca5"} Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.503041 4736 generic.go:334] "Generic (PLEG): container finished" podID="df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e" containerID="5e0222a9ba3cc626c5c25924ade6994b2834367ff8e4e98e96c3f7e4492bbd72" exitCode=0 Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.503204 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r79mn" event={"ID":"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e","Type":"ContainerDied","Data":"5e0222a9ba3cc626c5c25924ade6994b2834367ff8e4e98e96c3f7e4492bbd72"} Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.516620 4736 
generic.go:334] "Generic (PLEG): container finished" podID="bdae2a62-d876-4c93-b5a3-ae8bcb002f08" containerID="0c1a2d141d446feea0c780f97d104f6026543409118704e95df841f8f025fad5" exitCode=0 Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.516876 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bpqsj" event={"ID":"bdae2a62-d876-4c93-b5a3-ae8bcb002f08","Type":"ContainerDied","Data":"0c1a2d141d446feea0c780f97d104f6026543409118704e95df841f8f025fad5"} Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.516927 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq86s\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-kube-api-access-jq86s\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.522319 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.775647 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9xgln"] Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.777073 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.779571 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.779900 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.779978 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.819291 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-9xgln"] Feb 14 10:59:58 crc kubenswrapper[4736]: E0214 10:59:58.819992 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-qnwrc ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-9xgln" podUID="dec8ba71-ce94-43d2-8dc3-33aba8e3c08e" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.844007 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ccs82"] Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.849143 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.855499 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-9xgln"] Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.869504 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ccs82"] Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.881440 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-swiftconf\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.881649 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-scripts\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.881880 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-combined-ca-bundle\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.881965 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-ring-data-devices\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 
10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.882025 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnwrc\" (UniqueName: \"kubernetes.io/projected/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-kube-api-access-qnwrc\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.882090 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-etc-swift\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.882117 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-dispersionconf\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.983704 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-dispersionconf\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.983788 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-etc-swift\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 
crc kubenswrapper[4736]: I0214 10:59:58.983823 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-dispersionconf\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.983862 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-swiftconf\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.983889 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-combined-ca-bundle\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.983915 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-swiftconf\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.983948 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-scripts\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.983970 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-ring-data-devices\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.984033 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6tdm\" (UniqueName: \"kubernetes.io/projected/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-kube-api-access-s6tdm\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.984068 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.984107 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-combined-ca-bundle\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.984131 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-scripts\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.984198 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-ring-data-devices\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.984233 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-etc-swift\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.984259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnwrc\" (UniqueName: \"kubernetes.io/projected/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-kube-api-access-qnwrc\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: E0214 10:59:58.984544 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 10:59:58 crc kubenswrapper[4736]: E0214 10:59:58.984570 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 10:59:58 crc kubenswrapper[4736]: E0214 10:59:58.984621 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift podName:0283d5c8-4795-458e-8faf-c4908c75e01e nodeName:}" failed. No retries permitted until 2026-02-14 10:59:59.984601932 +0000 UTC m=+1110.353229340 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift") pod "swift-storage-0" (UID: "0283d5c8-4795-458e-8faf-c4908c75e01e") : configmap "swift-ring-files" not found Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.985576 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-etc-swift\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.986860 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-ring-data-devices\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.991082 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-scripts\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.995004 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-dispersionconf\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.998960 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-swiftconf\") pod 
\"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:58 crc kubenswrapper[4736]: I0214 10:59:58.999838 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-combined-ca-bundle\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.007516 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnwrc\" (UniqueName: \"kubernetes.io/projected/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-kube-api-access-qnwrc\") pod \"swift-ring-rebalance-9xgln\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.085855 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-ring-data-devices\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.085937 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6tdm\" (UniqueName: \"kubernetes.io/projected/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-kube-api-access-s6tdm\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.085997 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-scripts\") pod \"swift-ring-rebalance-ccs82\" (UID: 
\"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.086037 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-etc-swift\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.086065 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-dispersionconf\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.086106 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-combined-ca-bundle\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.086132 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-swiftconf\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.087074 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-etc-swift\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc 
kubenswrapper[4736]: I0214 10:59:59.087192 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-scripts\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.087306 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-ring-data-devices\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.093939 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-combined-ca-bundle\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.096077 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-dispersionconf\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.096328 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-swiftconf\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.105000 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s6tdm\" (UniqueName: \"kubernetes.io/projected/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-kube-api-access-s6tdm\") pod \"swift-ring-rebalance-ccs82\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.178967 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ccs82" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.462678 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ccs82"] Feb 14 10:59:59 crc kubenswrapper[4736]: W0214 10:59:59.467463 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d4bbe4_e30d_4906_ac6e_c4da9f6faf9a.slice/crio-58f482628a536da2c7d3f8339da1bb839b3db29e08a7ee2faeaf11b43e80413a WatchSource:0}: Error finding container 58f482628a536da2c7d3f8339da1bb839b3db29e08a7ee2faeaf11b43e80413a: Status 404 returned error can't find the container with id 58f482628a536da2c7d3f8339da1bb839b3db29e08a7ee2faeaf11b43e80413a Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.528056 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gtvzw" event={"ID":"f9d4ed58-4f61-4c47-acf8-09837e068c27","Type":"ContainerStarted","Data":"84e9d2785827d8ed1bfee59f606255436323d45e3c5f87729da4a707985fb9fc"} Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.528361 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.530233 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ccs82" event={"ID":"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a","Type":"ContainerStarted","Data":"58f482628a536da2c7d3f8339da1bb839b3db29e08a7ee2faeaf11b43e80413a"} Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 
10:59:59.530335 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.543888 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9xgln" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.696605 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-dispersionconf\") pod \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.696695 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-ring-data-devices\") pod \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.696728 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-combined-ca-bundle\") pod \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.696788 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnwrc\" (UniqueName: \"kubernetes.io/projected/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-kube-api-access-qnwrc\") pod \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.696817 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-scripts\") 
pod \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.696860 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-etc-swift\") pod \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.696881 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-swiftconf\") pod \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\" (UID: \"dec8ba71-ce94-43d2-8dc3-33aba8e3c08e\") " Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.698257 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-scripts" (OuterVolumeSpecName: "scripts") pod "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e" (UID: "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.698406 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e" (UID: "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.698462 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e" (UID: "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e"). 
InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.701923 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e" (UID: "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.702792 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e" (UID: "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.703379 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e" (UID: "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.723975 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-kube-api-access-qnwrc" (OuterVolumeSpecName: "kube-api-access-qnwrc") pod "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e" (UID: "dec8ba71-ce94-43d2-8dc3-33aba8e3c08e"). InnerVolumeSpecName "kube-api-access-qnwrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.799038 4736 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.799074 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.799088 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnwrc\" (UniqueName: \"kubernetes.io/projected/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-kube-api-access-qnwrc\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.799101 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.799113 4736 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.799125 4736 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.799137 4736 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.872474 4736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-gtvzw" podStartSLOduration=2.872456765 podStartE2EDuration="2.872456765s" podCreationTimestamp="2026-02-14 10:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 10:59:59.552184201 +0000 UTC m=+1109.920811569" watchObservedRunningTime="2026-02-14 10:59:59.872456765 +0000 UTC m=+1110.241084133" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.878161 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4n8xn"] Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.879400 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4n8xn" Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.899244 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4n8xn"] Feb 14 10:59:59 crc kubenswrapper[4736]: I0214 10:59:59.990455 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.001169 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4h99\" (UniqueName: \"kubernetes.io/projected/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-kube-api-access-q4h99\") pod \"glance-db-create-4n8xn\" (UID: \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\") " pod="openstack/glance-db-create-4n8xn" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.001460 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.001508 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-operator-scripts\") pod \"glance-db-create-4n8xn\" (UID: \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\") " pod="openstack/glance-db-create-4n8xn" Feb 14 11:00:00 crc kubenswrapper[4736]: E0214 11:00:00.001630 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 11:00:00 crc kubenswrapper[4736]: E0214 11:00:00.001642 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 11:00:00 crc kubenswrapper[4736]: E0214 11:00:00.001675 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift podName:0283d5c8-4795-458e-8faf-c4908c75e01e nodeName:}" failed. 
No retries permitted until 2026-02-14 11:00:02.001662742 +0000 UTC m=+1112.370290110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift") pod "swift-storage-0" (UID: "0283d5c8-4795-458e-8faf-c4908c75e01e") : configmap "swift-ring-files" not found Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.008006 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9a59-account-create-update-hxb95"] Feb 14 11:00:00 crc kubenswrapper[4736]: E0214 11:00:00.008403 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6866fe7-50cc-40b2-8326-ac36ca31eb25" containerName="mariadb-account-create-update" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.008419 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6866fe7-50cc-40b2-8326-ac36ca31eb25" containerName="mariadb-account-create-update" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.008639 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6866fe7-50cc-40b2-8326-ac36ca31eb25" containerName="mariadb-account-create-update" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.014836 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.028203 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.043441 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9a59-account-create-update-hxb95"] Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.109546 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6866fe7-50cc-40b2-8326-ac36ca31eb25-operator-scripts\") pod \"d6866fe7-50cc-40b2-8326-ac36ca31eb25\" (UID: \"d6866fe7-50cc-40b2-8326-ac36ca31eb25\") " Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.109937 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75tn7\" (UniqueName: \"kubernetes.io/projected/d6866fe7-50cc-40b2-8326-ac36ca31eb25-kube-api-access-75tn7\") pod \"d6866fe7-50cc-40b2-8326-ac36ca31eb25\" (UID: \"d6866fe7-50cc-40b2-8326-ac36ca31eb25\") " Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.110166 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4h99\" (UniqueName: \"kubernetes.io/projected/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-kube-api-access-q4h99\") pod \"glance-db-create-4n8xn\" (UID: \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\") " pod="openstack/glance-db-create-4n8xn" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.110224 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsnkr\" (UniqueName: \"kubernetes.io/projected/3bebc481-1b21-4a52-9d7e-f2683269c0a5-kube-api-access-jsnkr\") pod \"glance-9a59-account-create-update-hxb95\" (UID: \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\") " pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:00 crc 
kubenswrapper[4736]: I0214 11:00:00.110403 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-operator-scripts\") pod \"glance-db-create-4n8xn\" (UID: \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\") " pod="openstack/glance-db-create-4n8xn" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.110485 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bebc481-1b21-4a52-9d7e-f2683269c0a5-operator-scripts\") pod \"glance-9a59-account-create-update-hxb95\" (UID: \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\") " pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.110594 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6866fe7-50cc-40b2-8326-ac36ca31eb25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6866fe7-50cc-40b2-8326-ac36ca31eb25" (UID: "d6866fe7-50cc-40b2-8326-ac36ca31eb25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.113691 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-operator-scripts\") pod \"glance-db-create-4n8xn\" (UID: \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\") " pod="openstack/glance-db-create-4n8xn" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.123464 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6866fe7-50cc-40b2-8326-ac36ca31eb25-kube-api-access-75tn7" (OuterVolumeSpecName: "kube-api-access-75tn7") pod "d6866fe7-50cc-40b2-8326-ac36ca31eb25" (UID: "d6866fe7-50cc-40b2-8326-ac36ca31eb25"). 
InnerVolumeSpecName "kube-api-access-75tn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.157287 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4h99\" (UniqueName: \"kubernetes.io/projected/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-kube-api-access-q4h99\") pod \"glance-db-create-4n8xn\" (UID: \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\") " pod="openstack/glance-db-create-4n8xn" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.192679 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7"] Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.193884 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.197648 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.197774 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.211766 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7"] Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.212731 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bebc481-1b21-4a52-9d7e-f2683269c0a5-operator-scripts\") pod \"glance-9a59-account-create-update-hxb95\" (UID: \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\") " pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.212967 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jsnkr\" (UniqueName: \"kubernetes.io/projected/3bebc481-1b21-4a52-9d7e-f2683269c0a5-kube-api-access-jsnkr\") pod \"glance-9a59-account-create-update-hxb95\" (UID: \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\") " pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.213098 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75tn7\" (UniqueName: \"kubernetes.io/projected/d6866fe7-50cc-40b2-8326-ac36ca31eb25-kube-api-access-75tn7\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.213114 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6866fe7-50cc-40b2-8326-ac36ca31eb25-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.226199 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bebc481-1b21-4a52-9d7e-f2683269c0a5-operator-scripts\") pod \"glance-9a59-account-create-update-hxb95\" (UID: \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\") " pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.250498 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1952-account-create-update-sg565" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.255758 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsnkr\" (UniqueName: \"kubernetes.io/projected/3bebc481-1b21-4a52-9d7e-f2683269c0a5-kube-api-access-jsnkr\") pod \"glance-9a59-account-create-update-hxb95\" (UID: \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\") " pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.279437 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r79mn" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.283244 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bpqsj" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.287832 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4n8xn" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.316711 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx5x7\" (UniqueName: \"kubernetes.io/projected/efe7622f-339d-408e-a8bc-b83f3fd55653-kube-api-access-cx5x7\") pod \"efe7622f-339d-408e-a8bc-b83f3fd55653\" (UID: \"efe7622f-339d-408e-a8bc-b83f3fd55653\") " Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.316842 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe7622f-339d-408e-a8bc-b83f3fd55653-operator-scripts\") pod \"efe7622f-339d-408e-a8bc-b83f3fd55653\" (UID: \"efe7622f-339d-408e-a8bc-b83f3fd55653\") " Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.317131 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0205e213-3253-4b14-b645-18a0dfdfe4d3-config-volume\") pod \"collect-profiles-29517780-b6ql7\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.317154 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0205e213-3253-4b14-b645-18a0dfdfe4d3-secret-volume\") pod \"collect-profiles-29517780-b6ql7\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.317210 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gnx\" (UniqueName: \"kubernetes.io/projected/0205e213-3253-4b14-b645-18a0dfdfe4d3-kube-api-access-t5gnx\") pod \"collect-profiles-29517780-b6ql7\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.317534 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efe7622f-339d-408e-a8bc-b83f3fd55653-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efe7622f-339d-408e-a8bc-b83f3fd55653" (UID: "efe7622f-339d-408e-a8bc-b83f3fd55653"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.328295 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe7622f-339d-408e-a8bc-b83f3fd55653-kube-api-access-cx5x7" (OuterVolumeSpecName: "kube-api-access-cx5x7") pod "efe7622f-339d-408e-a8bc-b83f3fd55653" (UID: "efe7622f-339d-408e-a8bc-b83f3fd55653"). InnerVolumeSpecName "kube-api-access-cx5x7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.357018 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.421034 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-operator-scripts\") pod \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\" (UID: \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\") " Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.421105 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-operator-scripts\") pod \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\" (UID: \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\") " Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.421239 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmwf\" (UniqueName: \"kubernetes.io/projected/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-kube-api-access-pdmwf\") pod \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\" (UID: \"bdae2a62-d876-4c93-b5a3-ae8bcb002f08\") " Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.421295 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s9dp\" (UniqueName: \"kubernetes.io/projected/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-kube-api-access-4s9dp\") pod \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\" (UID: \"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e\") " Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.421576 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5gnx\" (UniqueName: \"kubernetes.io/projected/0205e213-3253-4b14-b645-18a0dfdfe4d3-kube-api-access-t5gnx\") pod 
\"collect-profiles-29517780-b6ql7\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.421706 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0205e213-3253-4b14-b645-18a0dfdfe4d3-config-volume\") pod \"collect-profiles-29517780-b6ql7\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.421733 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0205e213-3253-4b14-b645-18a0dfdfe4d3-secret-volume\") pod \"collect-profiles-29517780-b6ql7\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.421827 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe7622f-339d-408e-a8bc-b83f3fd55653-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.421843 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx5x7\" (UniqueName: \"kubernetes.io/projected/efe7622f-339d-408e-a8bc-b83f3fd55653-kube-api-access-cx5x7\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.425171 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0205e213-3253-4b14-b645-18a0dfdfe4d3-secret-volume\") pod \"collect-profiles-29517780-b6ql7\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc 
kubenswrapper[4736]: I0214 11:00:00.427716 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e" (UID: "df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.428114 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdae2a62-d876-4c93-b5a3-ae8bcb002f08" (UID: "bdae2a62-d876-4c93-b5a3-ae8bcb002f08"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.452935 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-kube-api-access-pdmwf" (OuterVolumeSpecName: "kube-api-access-pdmwf") pod "bdae2a62-d876-4c93-b5a3-ae8bcb002f08" (UID: "bdae2a62-d876-4c93-b5a3-ae8bcb002f08"). InnerVolumeSpecName "kube-api-access-pdmwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.453003 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-kube-api-access-4s9dp" (OuterVolumeSpecName: "kube-api-access-4s9dp") pod "df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e" (UID: "df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e"). InnerVolumeSpecName "kube-api-access-4s9dp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.460597 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0205e213-3253-4b14-b645-18a0dfdfe4d3-config-volume\") pod \"collect-profiles-29517780-b6ql7\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.471611 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5gnx\" (UniqueName: \"kubernetes.io/projected/0205e213-3253-4b14-b645-18a0dfdfe4d3-kube-api-access-t5gnx\") pod \"collect-profiles-29517780-b6ql7\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.522770 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdmwf\" (UniqueName: \"kubernetes.io/projected/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-kube-api-access-pdmwf\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.522789 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s9dp\" (UniqueName: \"kubernetes.io/projected/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-kube-api-access-4s9dp\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.522799 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.522809 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdae2a62-d876-4c93-b5a3-ae8bcb002f08-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.540120 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.589531 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1952-account-create-update-sg565" event={"ID":"efe7622f-339d-408e-a8bc-b83f3fd55653","Type":"ContainerDied","Data":"62a0eeae8cf9530b57b434493a5a07682e6cd1f8a052892e83ec15f9ba6d41a8"} Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.589570 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a0eeae8cf9530b57b434493a5a07682e6cd1f8a052892e83ec15f9ba6d41a8" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.589641 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1952-account-create-update-sg565" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.593134 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r79mn" event={"ID":"df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e","Type":"ContainerDied","Data":"f321f498d2f080dc9c5954fea7f9d83737ee5c9b432fc23ede80f97378ebb0de"} Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.593185 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f321f498d2f080dc9c5954fea7f9d83737ee5c9b432fc23ede80f97378ebb0de" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.593265 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-r79mn" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.595459 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bpqsj" event={"ID":"bdae2a62-d876-4c93-b5a3-ae8bcb002f08","Type":"ContainerDied","Data":"dd3bbdf7f677003cfe5547454b84e96f3fe4a9594819b64779b309606c88f16b"} Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.595484 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd3bbdf7f677003cfe5547454b84e96f3fe4a9594819b64779b309606c88f16b" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.595535 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bpqsj" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.597271 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9xgln" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.598172 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-877f-account-create-update-pwlkx" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.598818 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-877f-account-create-update-pwlkx" event={"ID":"d6866fe7-50cc-40b2-8326-ac36ca31eb25","Type":"ContainerDied","Data":"2766bee760ae88a743c61209c0811e27fce3d1f54cf06f3467a4a67fb317f434"} Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.599138 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2766bee760ae88a743c61209c0811e27fce3d1f54cf06f3467a4a67fb317f434" Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.736075 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-9xgln"] Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.744500 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-9xgln"] Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.950611 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7"] Feb 14 11:00:00 crc kubenswrapper[4736]: W0214 11:00:00.959239 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0205e213_3253_4b14_b645_18a0dfdfe4d3.slice/crio-3fdecb91df7760b44d5d22c561705ab5eaed47a92f01498435dc5490ca350882 WatchSource:0}: Error finding container 3fdecb91df7760b44d5d22c561705ab5eaed47a92f01498435dc5490ca350882: Status 404 returned error can't find the container with id 3fdecb91df7760b44d5d22c561705ab5eaed47a92f01498435dc5490ca350882 Feb 14 11:00:00 crc kubenswrapper[4736]: I0214 11:00:00.960142 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9a59-account-create-update-hxb95"] Feb 14 11:00:00 crc kubenswrapper[4736]: W0214 11:00:00.961006 4736 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bebc481_1b21_4a52_9d7e_f2683269c0a5.slice/crio-e1f95fc0b32e89a9569bcf8b5be7784fe2f8f3bb3af02f4d08f13b0cf85feaaf WatchSource:0}: Error finding container e1f95fc0b32e89a9569bcf8b5be7784fe2f8f3bb3af02f4d08f13b0cf85feaaf: Status 404 returned error can't find the container with id e1f95fc0b32e89a9569bcf8b5be7784fe2f8f3bb3af02f4d08f13b0cf85feaaf Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.066003 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4n8xn"] Feb 14 11:00:01 crc kubenswrapper[4736]: W0214 11:00:01.082358 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35210bf4_ee1c_4534_ab21_c04b78c3eb1e.slice/crio-dc42d4d04c3c9c3b33a86d7eea3387a975e66b92c041ac19e7bf64d90815d335 WatchSource:0}: Error finding container dc42d4d04c3c9c3b33a86d7eea3387a975e66b92c041ac19e7bf64d90815d335: Status 404 returned error can't find the container with id dc42d4d04c3c9c3b33a86d7eea3387a975e66b92c041ac19e7bf64d90815d335 Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.612887 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" event={"ID":"0205e213-3253-4b14-b645-18a0dfdfe4d3","Type":"ContainerStarted","Data":"b9364a798d80b5477cfb1e4cb8b4556328faffd007702fd9ef3ce2ddf8ee8d5b"} Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.613123 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" event={"ID":"0205e213-3253-4b14-b645-18a0dfdfe4d3","Type":"ContainerStarted","Data":"3fdecb91df7760b44d5d22c561705ab5eaed47a92f01498435dc5490ca350882"} Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.624046 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4n8xn" 
event={"ID":"35210bf4-ee1c-4534-ab21-c04b78c3eb1e","Type":"ContainerStarted","Data":"6b57c021b584071648ceb2516f10a821ce461aec3a2998959873e7c5bd309833"} Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.624089 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4n8xn" event={"ID":"35210bf4-ee1c-4534-ab21-c04b78c3eb1e","Type":"ContainerStarted","Data":"dc42d4d04c3c9c3b33a86d7eea3387a975e66b92c041ac19e7bf64d90815d335"} Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.635885 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" podStartSLOduration=1.635869609 podStartE2EDuration="1.635869609s" podCreationTimestamp="2026-02-14 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:01.631269177 +0000 UTC m=+1111.999896545" watchObservedRunningTime="2026-02-14 11:00:01.635869609 +0000 UTC m=+1112.004496977" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.638172 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9a59-account-create-update-hxb95" event={"ID":"3bebc481-1b21-4a52-9d7e-f2683269c0a5","Type":"ContainerStarted","Data":"dd6f228a2bc49ec01badf7acecb9bd4430b96ceb0f18f2349dafa69cb16c93ef"} Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.638249 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9a59-account-create-update-hxb95" event={"ID":"3bebc481-1b21-4a52-9d7e-f2683269c0a5","Type":"ContainerStarted","Data":"e1f95fc0b32e89a9569bcf8b5be7784fe2f8f3bb3af02f4d08f13b0cf85feaaf"} Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.646789 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-4n8xn" podStartSLOduration=2.6467653220000003 podStartE2EDuration="2.646765322s" podCreationTimestamp="2026-02-14 
10:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:01.645218418 +0000 UTC m=+1112.013845786" watchObservedRunningTime="2026-02-14 11:00:01.646765322 +0000 UTC m=+1112.015392690" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.679419 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-9a59-account-create-update-hxb95" podStartSLOduration=2.679394501 podStartE2EDuration="2.679394501s" podCreationTimestamp="2026-02-14 10:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:01.67414202 +0000 UTC m=+1112.042769388" watchObservedRunningTime="2026-02-14 11:00:01.679394501 +0000 UTC m=+1112.048021879" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.724866 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-59xwd"] Feb 14 11:00:01 crc kubenswrapper[4736]: E0214 11:00:01.725265 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdae2a62-d876-4c93-b5a3-ae8bcb002f08" containerName="mariadb-database-create" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.725282 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdae2a62-d876-4c93-b5a3-ae8bcb002f08" containerName="mariadb-database-create" Feb 14 11:00:01 crc kubenswrapper[4736]: E0214 11:00:01.725301 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe7622f-339d-408e-a8bc-b83f3fd55653" containerName="mariadb-account-create-update" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.725308 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe7622f-339d-408e-a8bc-b83f3fd55653" containerName="mariadb-account-create-update" Feb 14 11:00:01 crc kubenswrapper[4736]: E0214 11:00:01.725323 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e" containerName="mariadb-database-create" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.725329 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e" containerName="mariadb-database-create" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.725481 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e" containerName="mariadb-database-create" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.725493 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdae2a62-d876-4c93-b5a3-ae8bcb002f08" containerName="mariadb-database-create" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.725504 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe7622f-339d-408e-a8bc-b83f3fd55653" containerName="mariadb-account-create-update" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.726049 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.728663 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.752283 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-59xwd"] Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.858531 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f68a78b-efca-4aa8-b690-0def09a60418-operator-scripts\") pod \"root-account-create-update-59xwd\" (UID: \"8f68a78b-efca-4aa8-b690-0def09a60418\") " pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.858909 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzqr\" (UniqueName: \"kubernetes.io/projected/8f68a78b-efca-4aa8-b690-0def09a60418-kube-api-access-jhzqr\") pod \"root-account-create-update-59xwd\" (UID: \"8f68a78b-efca-4aa8-b690-0def09a60418\") " pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.960352 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhzqr\" (UniqueName: \"kubernetes.io/projected/8f68a78b-efca-4aa8-b690-0def09a60418-kube-api-access-jhzqr\") pod \"root-account-create-update-59xwd\" (UID: \"8f68a78b-efca-4aa8-b690-0def09a60418\") " pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.960519 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f68a78b-efca-4aa8-b690-0def09a60418-operator-scripts\") pod \"root-account-create-update-59xwd\" (UID: 
\"8f68a78b-efca-4aa8-b690-0def09a60418\") " pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.961492 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f68a78b-efca-4aa8-b690-0def09a60418-operator-scripts\") pod \"root-account-create-update-59xwd\" (UID: \"8f68a78b-efca-4aa8-b690-0def09a60418\") " pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:01 crc kubenswrapper[4736]: I0214 11:00:01.987122 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhzqr\" (UniqueName: \"kubernetes.io/projected/8f68a78b-efca-4aa8-b690-0def09a60418-kube-api-access-jhzqr\") pod \"root-account-create-update-59xwd\" (UID: \"8f68a78b-efca-4aa8-b690-0def09a60418\") " pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:02 crc kubenswrapper[4736]: I0214 11:00:02.067607 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:02 crc kubenswrapper[4736]: I0214 11:00:02.068204 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 11:00:02 crc kubenswrapper[4736]: E0214 11:00:02.068359 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 11:00:02 crc kubenswrapper[4736]: E0214 11:00:02.068373 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 11:00:02 crc kubenswrapper[4736]: E0214 11:00:02.068414 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift podName:0283d5c8-4795-458e-8faf-c4908c75e01e nodeName:}" failed. No retries permitted until 2026-02-14 11:00:06.068396193 +0000 UTC m=+1116.437023561 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift") pod "swift-storage-0" (UID: "0283d5c8-4795-458e-8faf-c4908c75e01e") : configmap "swift-ring-files" not found Feb 14 11:00:02 crc kubenswrapper[4736]: I0214 11:00:02.406641 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec8ba71-ce94-43d2-8dc3-33aba8e3c08e" path="/var/lib/kubelet/pods/dec8ba71-ce94-43d2-8dc3-33aba8e3c08e/volumes" Feb 14 11:00:02 crc kubenswrapper[4736]: I0214 11:00:02.649509 4736 generic.go:334] "Generic (PLEG): container finished" podID="3bebc481-1b21-4a52-9d7e-f2683269c0a5" containerID="dd6f228a2bc49ec01badf7acecb9bd4430b96ceb0f18f2349dafa69cb16c93ef" exitCode=0 Feb 14 11:00:02 crc kubenswrapper[4736]: I0214 11:00:02.649566 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9a59-account-create-update-hxb95" event={"ID":"3bebc481-1b21-4a52-9d7e-f2683269c0a5","Type":"ContainerDied","Data":"dd6f228a2bc49ec01badf7acecb9bd4430b96ceb0f18f2349dafa69cb16c93ef"} Feb 14 11:00:02 crc kubenswrapper[4736]: I0214 11:00:02.652250 4736 generic.go:334] "Generic (PLEG): container finished" podID="0205e213-3253-4b14-b645-18a0dfdfe4d3" containerID="b9364a798d80b5477cfb1e4cb8b4556328faffd007702fd9ef3ce2ddf8ee8d5b" exitCode=0 Feb 14 11:00:02 crc kubenswrapper[4736]: I0214 11:00:02.652317 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" event={"ID":"0205e213-3253-4b14-b645-18a0dfdfe4d3","Type":"ContainerDied","Data":"b9364a798d80b5477cfb1e4cb8b4556328faffd007702fd9ef3ce2ddf8ee8d5b"} Feb 14 11:00:02 crc kubenswrapper[4736]: I0214 11:00:02.654594 4736 generic.go:334] "Generic (PLEG): container finished" podID="35210bf4-ee1c-4534-ab21-c04b78c3eb1e" containerID="6b57c021b584071648ceb2516f10a821ce461aec3a2998959873e7c5bd309833" exitCode=0 Feb 14 11:00:02 crc kubenswrapper[4736]: I0214 11:00:02.654642 
4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4n8xn" event={"ID":"35210bf4-ee1c-4534-ab21-c04b78c3eb1e","Type":"ContainerDied","Data":"6b57c021b584071648ceb2516f10a821ce461aec3a2998959873e7c5bd309833"} Feb 14 11:00:04 crc kubenswrapper[4736]: I0214 11:00:04.956912 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:04 crc kubenswrapper[4736]: I0214 11:00:04.966803 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4n8xn" Feb 14 11:00:04 crc kubenswrapper[4736]: I0214 11:00:04.979551 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.120046 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bebc481-1b21-4a52-9d7e-f2683269c0a5-operator-scripts\") pod \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\" (UID: \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\") " Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.120100 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4h99\" (UniqueName: \"kubernetes.io/projected/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-kube-api-access-q4h99\") pod \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\" (UID: \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\") " Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.120129 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0205e213-3253-4b14-b645-18a0dfdfe4d3-secret-volume\") pod \"0205e213-3253-4b14-b645-18a0dfdfe4d3\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.120169 4736 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsnkr\" (UniqueName: \"kubernetes.io/projected/3bebc481-1b21-4a52-9d7e-f2683269c0a5-kube-api-access-jsnkr\") pod \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\" (UID: \"3bebc481-1b21-4a52-9d7e-f2683269c0a5\") " Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.121246 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5gnx\" (UniqueName: \"kubernetes.io/projected/0205e213-3253-4b14-b645-18a0dfdfe4d3-kube-api-access-t5gnx\") pod \"0205e213-3253-4b14-b645-18a0dfdfe4d3\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.121301 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-operator-scripts\") pod \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\" (UID: \"35210bf4-ee1c-4534-ab21-c04b78c3eb1e\") " Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.121455 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0205e213-3253-4b14-b645-18a0dfdfe4d3-config-volume\") pod \"0205e213-3253-4b14-b645-18a0dfdfe4d3\" (UID: \"0205e213-3253-4b14-b645-18a0dfdfe4d3\") " Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.121486 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bebc481-1b21-4a52-9d7e-f2683269c0a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bebc481-1b21-4a52-9d7e-f2683269c0a5" (UID: "3bebc481-1b21-4a52-9d7e-f2683269c0a5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.121820 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35210bf4-ee1c-4534-ab21-c04b78c3eb1e" (UID: "35210bf4-ee1c-4534-ab21-c04b78c3eb1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.121838 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bebc481-1b21-4a52-9d7e-f2683269c0a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.122270 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0205e213-3253-4b14-b645-18a0dfdfe4d3-config-volume" (OuterVolumeSpecName: "config-volume") pod "0205e213-3253-4b14-b645-18a0dfdfe4d3" (UID: "0205e213-3253-4b14-b645-18a0dfdfe4d3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.125596 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-kube-api-access-q4h99" (OuterVolumeSpecName: "kube-api-access-q4h99") pod "35210bf4-ee1c-4534-ab21-c04b78c3eb1e" (UID: "35210bf4-ee1c-4534-ab21-c04b78c3eb1e"). InnerVolumeSpecName "kube-api-access-q4h99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.126199 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bebc481-1b21-4a52-9d7e-f2683269c0a5-kube-api-access-jsnkr" (OuterVolumeSpecName: "kube-api-access-jsnkr") pod "3bebc481-1b21-4a52-9d7e-f2683269c0a5" (UID: "3bebc481-1b21-4a52-9d7e-f2683269c0a5"). InnerVolumeSpecName "kube-api-access-jsnkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.126286 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0205e213-3253-4b14-b645-18a0dfdfe4d3-kube-api-access-t5gnx" (OuterVolumeSpecName: "kube-api-access-t5gnx") pod "0205e213-3253-4b14-b645-18a0dfdfe4d3" (UID: "0205e213-3253-4b14-b645-18a0dfdfe4d3"). InnerVolumeSpecName "kube-api-access-t5gnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.129846 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0205e213-3253-4b14-b645-18a0dfdfe4d3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0205e213-3253-4b14-b645-18a0dfdfe4d3" (UID: "0205e213-3253-4b14-b645-18a0dfdfe4d3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.223188 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4h99\" (UniqueName: \"kubernetes.io/projected/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-kube-api-access-q4h99\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.223232 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0205e213-3253-4b14-b645-18a0dfdfe4d3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.223245 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsnkr\" (UniqueName: \"kubernetes.io/projected/3bebc481-1b21-4a52-9d7e-f2683269c0a5-kube-api-access-jsnkr\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.223257 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5gnx\" (UniqueName: \"kubernetes.io/projected/0205e213-3253-4b14-b645-18a0dfdfe4d3-kube-api-access-t5gnx\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.223269 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35210bf4-ee1c-4534-ab21-c04b78c3eb1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.223281 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0205e213-3253-4b14-b645-18a0dfdfe4d3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.306438 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-59xwd"] Feb 14 11:00:05 crc kubenswrapper[4736]: E0214 11:00:05.505790 4736 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bb03a69_d572_4b83_97b9_13d33b501b6a.slice/crio-conmon-027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34ab9b0c_bef8_4c48_9557_89ad8b9d864f.slice/crio-conmon-e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13.scope\": RecentStats: unable to find data in memory cache]" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.681107 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4n8xn" event={"ID":"35210bf4-ee1c-4534-ab21-c04b78c3eb1e","Type":"ContainerDied","Data":"dc42d4d04c3c9c3b33a86d7eea3387a975e66b92c041ac19e7bf64d90815d335"} Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.682378 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc42d4d04c3c9c3b33a86d7eea3387a975e66b92c041ac19e7bf64d90815d335" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.681189 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4n8xn" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.684211 4736 generic.go:334] "Generic (PLEG): container finished" podID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" containerID="e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13" exitCode=0 Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.684274 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"34ab9b0c-bef8-4c48-9557-89ad8b9d864f","Type":"ContainerDied","Data":"e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13"} Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.690133 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9a59-account-create-update-hxb95" event={"ID":"3bebc481-1b21-4a52-9d7e-f2683269c0a5","Type":"ContainerDied","Data":"e1f95fc0b32e89a9569bcf8b5be7784fe2f8f3bb3af02f4d08f13b0cf85feaaf"} Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.690278 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1f95fc0b32e89a9569bcf8b5be7784fe2f8f3bb3af02f4d08f13b0cf85feaaf" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.690174 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9a59-account-create-update-hxb95" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.692775 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59xwd" event={"ID":"8f68a78b-efca-4aa8-b690-0def09a60418","Type":"ContainerStarted","Data":"e42c22954e652a45899b2ac77daf7c75a5c2c8a0f0d84693a60591c6480708fe"} Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.696145 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" event={"ID":"0205e213-3253-4b14-b645-18a0dfdfe4d3","Type":"ContainerDied","Data":"3fdecb91df7760b44d5d22c561705ab5eaed47a92f01498435dc5490ca350882"} Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.696184 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fdecb91df7760b44d5d22c561705ab5eaed47a92f01498435dc5490ca350882" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.696282 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7" Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.713308 4736 generic.go:334] "Generic (PLEG): container finished" podID="0bb03a69-d572-4b83-97b9-13d33b501b6a" containerID="027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b" exitCode=0 Feb 14 11:00:05 crc kubenswrapper[4736]: I0214 11:00:05.713388 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bb03a69-d572-4b83-97b9-13d33b501b6a","Type":"ContainerDied","Data":"027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b"} Feb 14 11:00:06 crc kubenswrapper[4736]: I0214 11:00:06.140828 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 11:00:06 crc kubenswrapper[4736]: E0214 11:00:06.141044 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 11:00:06 crc kubenswrapper[4736]: E0214 11:00:06.141195 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 11:00:06 crc kubenswrapper[4736]: E0214 11:00:06.141249 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift podName:0283d5c8-4795-458e-8faf-c4908c75e01e nodeName:}" failed. No retries permitted until 2026-02-14 11:00:14.141231928 +0000 UTC m=+1124.509859296 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift") pod "swift-storage-0" (UID: "0283d5c8-4795-458e-8faf-c4908c75e01e") : configmap "swift-ring-files" not found Feb 14 11:00:06 crc kubenswrapper[4736]: I0214 11:00:06.724380 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"34ab9b0c-bef8-4c48-9557-89ad8b9d864f","Type":"ContainerStarted","Data":"ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84"} Feb 14 11:00:06 crc kubenswrapper[4736]: I0214 11:00:06.724627 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 14 11:00:06 crc kubenswrapper[4736]: I0214 11:00:06.725781 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59xwd" event={"ID":"8f68a78b-efca-4aa8-b690-0def09a60418","Type":"ContainerStarted","Data":"675ddc5bcfd6a91b90ecee27df0cae06fcbb985022d6aaf7ad814113f68efb37"} Feb 14 11:00:06 crc kubenswrapper[4736]: I0214 11:00:06.727864 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bb03a69-d572-4b83-97b9-13d33b501b6a","Type":"ContainerStarted","Data":"51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb"} Feb 14 11:00:06 crc kubenswrapper[4736]: I0214 11:00:06.728146 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:00:06 crc kubenswrapper[4736]: I0214 11:00:06.759326 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.517576572 podStartE2EDuration="57.759302909s" podCreationTimestamp="2026-02-14 10:59:09 +0000 UTC" firstStartedPulling="2026-02-14 10:59:12.013722651 +0000 UTC m=+1062.382350019" lastFinishedPulling="2026-02-14 10:59:32.255448988 +0000 UTC m=+1082.624076356" 
observedRunningTime="2026-02-14 11:00:06.749712543 +0000 UTC m=+1117.118339911" watchObservedRunningTime="2026-02-14 11:00:06.759302909 +0000 UTC m=+1117.127930277" Feb 14 11:00:06 crc kubenswrapper[4736]: I0214 11:00:06.814754 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.735193201 podStartE2EDuration="56.814723414s" podCreationTimestamp="2026-02-14 10:59:10 +0000 UTC" firstStartedPulling="2026-02-14 10:59:12.298683428 +0000 UTC m=+1062.667310796" lastFinishedPulling="2026-02-14 10:59:32.378213641 +0000 UTC m=+1082.746841009" observedRunningTime="2026-02-14 11:00:06.796341585 +0000 UTC m=+1117.164968953" watchObservedRunningTime="2026-02-14 11:00:06.814723414 +0000 UTC m=+1117.183350782" Feb 14 11:00:07 crc kubenswrapper[4736]: I0214 11:00:07.446080 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 11:00:07 crc kubenswrapper[4736]: I0214 11:00:07.466663 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-59xwd" podStartSLOduration=6.4666418199999995 podStartE2EDuration="6.46664182s" podCreationTimestamp="2026-02-14 11:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:06.818372969 +0000 UTC m=+1117.187000327" watchObservedRunningTime="2026-02-14 11:00:07.46664182 +0000 UTC m=+1117.835269198" Feb 14 11:00:07 crc kubenswrapper[4736]: I0214 11:00:07.512113 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gdqzj"] Feb 14 11:00:07 crc kubenswrapper[4736]: I0214 11:00:07.512596 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" podUID="0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" containerName="dnsmasq-dns" 
containerID="cri-o://20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54" gracePeriod=10 Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.712847 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.763605 4736 generic.go:334] "Generic (PLEG): container finished" podID="0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" containerID="20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54" exitCode=0 Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.763702 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" event={"ID":"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5","Type":"ContainerDied","Data":"20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54"} Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.763729 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" event={"ID":"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5","Type":"ContainerDied","Data":"c41381d6bdaf31ae933449bae12ae698a0ceba60d12d10c737cfcbf8908ce388"} Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.763766 4736 scope.go:117] "RemoveContainer" containerID="20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.763936 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gdqzj" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.777171 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ccs82" event={"ID":"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a","Type":"ContainerStarted","Data":"6c0970877ab3dd1b247b58c7301c4449c9d313c1c999600a2c43886a5107e2e0"} Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.778454 4736 generic.go:334] "Generic (PLEG): container finished" podID="8f68a78b-efca-4aa8-b690-0def09a60418" containerID="675ddc5bcfd6a91b90ecee27df0cae06fcbb985022d6aaf7ad814113f68efb37" exitCode=0 Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.778504 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59xwd" event={"ID":"8f68a78b-efca-4aa8-b690-0def09a60418","Type":"ContainerDied","Data":"675ddc5bcfd6a91b90ecee27df0cae06fcbb985022d6aaf7ad814113f68efb37"} Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.783598 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-nb\") pod \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.783770 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gthj5\" (UniqueName: \"kubernetes.io/projected/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-kube-api-access-gthj5\") pod \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.783922 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-sb\") pod \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\" (UID: 
\"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.783948 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-config\") pod \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.783981 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-dns-svc\") pod \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\" (UID: \"0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5\") " Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.792506 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-kube-api-access-gthj5" (OuterVolumeSpecName: "kube-api-access-gthj5") pod "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" (UID: "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5"). InnerVolumeSpecName "kube-api-access-gthj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.799810 4736 scope.go:117] "RemoveContainer" containerID="64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.836663 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ccs82" podStartSLOduration=2.41519467 podStartE2EDuration="10.836642375s" podCreationTimestamp="2026-02-14 10:59:58 +0000 UTC" firstStartedPulling="2026-02-14 10:59:59.472228721 +0000 UTC m=+1109.840856089" lastFinishedPulling="2026-02-14 11:00:07.893676426 +0000 UTC m=+1118.262303794" observedRunningTime="2026-02-14 11:00:08.81284929 +0000 UTC m=+1119.181476658" watchObservedRunningTime="2026-02-14 11:00:08.836642375 +0000 UTC m=+1119.205269743" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.853947 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" (UID: "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.860388 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" (UID: "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.879028 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-config" (OuterVolumeSpecName: "config") pod "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" (UID: "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.885772 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.885795 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gthj5\" (UniqueName: \"kubernetes.io/projected/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-kube-api-access-gthj5\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.885805 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.885813 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.906819 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" (UID: "0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.919254 4736 scope.go:117] "RemoveContainer" containerID="20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54" Feb 14 11:00:08 crc kubenswrapper[4736]: E0214 11:00:08.919646 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54\": container with ID starting with 20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54 not found: ID does not exist" containerID="20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.919712 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54"} err="failed to get container status \"20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54\": rpc error: code = NotFound desc = could not find container \"20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54\": container with ID starting with 20403d592d4b3e954e9bde483d03736d0d673e6a2b696187b50ce7a7ccc7ea54 not found: ID does not exist" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.919840 4736 scope.go:117] "RemoveContainer" containerID="64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17" Feb 14 11:00:08 crc kubenswrapper[4736]: E0214 11:00:08.920239 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17\": container with ID starting with 64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17 not found: ID does not exist" containerID="64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.920268 
4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17"} err="failed to get container status \"64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17\": rpc error: code = NotFound desc = could not find container \"64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17\": container with ID starting with 64049466f3eaeac13cd7fb3f834ff22fd7e0cf8a964b0e10154bc2a038700e17 not found: ID does not exist" Feb 14 11:00:08 crc kubenswrapper[4736]: I0214 11:00:08.986996 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:09 crc kubenswrapper[4736]: I0214 11:00:09.101557 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gdqzj"] Feb 14 11:00:09 crc kubenswrapper[4736]: I0214 11:00:09.116927 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gdqzj"] Feb 14 11:00:09 crc kubenswrapper[4736]: I0214 11:00:09.194378 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.156901 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.210373 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f68a78b-efca-4aa8-b690-0def09a60418-operator-scripts\") pod \"8f68a78b-efca-4aa8-b690-0def09a60418\" (UID: \"8f68a78b-efca-4aa8-b690-0def09a60418\") " Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.210493 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhzqr\" (UniqueName: \"kubernetes.io/projected/8f68a78b-efca-4aa8-b690-0def09a60418-kube-api-access-jhzqr\") pod \"8f68a78b-efca-4aa8-b690-0def09a60418\" (UID: \"8f68a78b-efca-4aa8-b690-0def09a60418\") " Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.210798 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f68a78b-efca-4aa8-b690-0def09a60418-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f68a78b-efca-4aa8-b690-0def09a60418" (UID: "8f68a78b-efca-4aa8-b690-0def09a60418"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.210908 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f68a78b-efca-4aa8-b690-0def09a60418-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.214651 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kbq8d"] Feb 14 11:00:10 crc kubenswrapper[4736]: E0214 11:00:10.214980 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f68a78b-efca-4aa8-b690-0def09a60418" containerName="mariadb-account-create-update" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215155 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f68a78b-efca-4aa8-b690-0def09a60418" containerName="mariadb-account-create-update" Feb 14 11:00:10 crc kubenswrapper[4736]: E0214 11:00:10.215170 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" containerName="dnsmasq-dns" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215177 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" containerName="dnsmasq-dns" Feb 14 11:00:10 crc kubenswrapper[4736]: E0214 11:00:10.215189 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35210bf4-ee1c-4534-ab21-c04b78c3eb1e" containerName="mariadb-database-create" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215195 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="35210bf4-ee1c-4534-ab21-c04b78c3eb1e" containerName="mariadb-database-create" Feb 14 11:00:10 crc kubenswrapper[4736]: E0214 11:00:10.215208 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" containerName="init" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215215 4736 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" containerName="init" Feb 14 11:00:10 crc kubenswrapper[4736]: E0214 11:00:10.215234 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bebc481-1b21-4a52-9d7e-f2683269c0a5" containerName="mariadb-account-create-update" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215239 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bebc481-1b21-4a52-9d7e-f2683269c0a5" containerName="mariadb-account-create-update" Feb 14 11:00:10 crc kubenswrapper[4736]: E0214 11:00:10.215252 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0205e213-3253-4b14-b645-18a0dfdfe4d3" containerName="collect-profiles" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215257 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0205e213-3253-4b14-b645-18a0dfdfe4d3" containerName="collect-profiles" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215407 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="35210bf4-ee1c-4534-ab21-c04b78c3eb1e" containerName="mariadb-database-create" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215417 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bebc481-1b21-4a52-9d7e-f2683269c0a5" containerName="mariadb-account-create-update" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215431 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" containerName="dnsmasq-dns" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215439 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f68a78b-efca-4aa8-b690-0def09a60418" containerName="mariadb-account-create-update" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.215454 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0205e213-3253-4b14-b645-18a0dfdfe4d3" containerName="collect-profiles" Feb 14 11:00:10 crc kubenswrapper[4736]: 
I0214 11:00:10.216046 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.217427 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f68a78b-efca-4aa8-b690-0def09a60418-kube-api-access-jhzqr" (OuterVolumeSpecName: "kube-api-access-jhzqr") pod "8f68a78b-efca-4aa8-b690-0def09a60418" (UID: "8f68a78b-efca-4aa8-b690-0def09a60418"). InnerVolumeSpecName "kube-api-access-jhzqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.221437 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7n2v6" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.221938 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.237131 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kbq8d"] Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.312026 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-combined-ca-bundle\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.312084 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-db-sync-config-data\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.312148 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-config-data\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.312201 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxvh\" (UniqueName: \"kubernetes.io/projected/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-kube-api-access-xlxvh\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.312278 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhzqr\" (UniqueName: \"kubernetes.io/projected/8f68a78b-efca-4aa8-b690-0def09a60418-kube-api-access-jhzqr\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.406343 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5" path="/var/lib/kubelet/pods/0a4f9bb6-7bbf-4993-aa30-79f0f25d58a5/volumes" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.413440 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-combined-ca-bundle\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.413490 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-db-sync-config-data\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 
11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.413545 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-config-data\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.413573 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxvh\" (UniqueName: \"kubernetes.io/projected/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-kube-api-access-xlxvh\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.417016 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-combined-ca-bundle\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.417708 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-db-sync-config-data\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.420603 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-config-data\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.436443 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xlxvh\" (UniqueName: \"kubernetes.io/projected/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-kube-api-access-xlxvh\") pod \"glance-db-sync-kbq8d\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.565883 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kbq8d" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.793465 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59xwd" event={"ID":"8f68a78b-efca-4aa8-b690-0def09a60418","Type":"ContainerDied","Data":"e42c22954e652a45899b2ac77daf7c75a5c2c8a0f0d84693a60591c6480708fe"} Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.793505 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e42c22954e652a45899b2ac77daf7c75a5c2c8a0f0d84693a60591c6480708fe" Feb 14 11:00:10 crc kubenswrapper[4736]: I0214 11:00:10.793557 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59xwd" Feb 14 11:00:11 crc kubenswrapper[4736]: I0214 11:00:11.240422 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kbq8d"] Feb 14 11:00:11 crc kubenswrapper[4736]: W0214 11:00:11.249174 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7755c5ab_4aba_4e82_a6f7_e6d63ca8efe1.slice/crio-e4dae5dcadb69fcfd3a72916e43460356e87070e1595343be52acfcfc4c01bf8 WatchSource:0}: Error finding container e4dae5dcadb69fcfd3a72916e43460356e87070e1595343be52acfcfc4c01bf8: Status 404 returned error can't find the container with id e4dae5dcadb69fcfd3a72916e43460356e87070e1595343be52acfcfc4c01bf8 Feb 14 11:00:11 crc kubenswrapper[4736]: I0214 11:00:11.804367 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kbq8d" event={"ID":"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1","Type":"ContainerStarted","Data":"e4dae5dcadb69fcfd3a72916e43460356e87070e1595343be52acfcfc4c01bf8"} Feb 14 11:00:13 crc kubenswrapper[4736]: I0214 11:00:12.999868 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-59xwd"] Feb 14 11:00:13 crc kubenswrapper[4736]: I0214 11:00:13.010886 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-59xwd"] Feb 14 11:00:14 crc kubenswrapper[4736]: I0214 11:00:14.174399 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 11:00:14 crc kubenswrapper[4736]: E0214 11:00:14.174544 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 11:00:14 crc kubenswrapper[4736]: E0214 
11:00:14.174735 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 11:00:14 crc kubenswrapper[4736]: E0214 11:00:14.174797 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift podName:0283d5c8-4795-458e-8faf-c4908c75e01e nodeName:}" failed. No retries permitted until 2026-02-14 11:00:30.174780853 +0000 UTC m=+1140.543408221 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift") pod "swift-storage-0" (UID: "0283d5c8-4795-458e-8faf-c4908c75e01e") : configmap "swift-ring-files" not found Feb 14 11:00:14 crc kubenswrapper[4736]: I0214 11:00:14.408484 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f68a78b-efca-4aa8-b690-0def09a60418" path="/var/lib/kubelet/pods/8f68a78b-efca-4aa8-b690-0def09a60418/volumes" Feb 14 11:00:15 crc kubenswrapper[4736]: I0214 11:00:15.131457 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-msd5j" podUID="2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba" containerName="ovn-controller" probeResult="failure" output=< Feb 14 11:00:15 crc kubenswrapper[4736]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 14 11:00:15 crc kubenswrapper[4736]: > Feb 14 11:00:15 crc kubenswrapper[4736]: I0214 11:00:15.348111 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6dm75" Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.017893 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xhm89"] Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.019478 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.022486 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.027477 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xhm89"] Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.140205 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-operator-scripts\") pod \"root-account-create-update-xhm89\" (UID: \"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\") " pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.140285 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8b5\" (UniqueName: \"kubernetes.io/projected/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-kube-api-access-rh8b5\") pod \"root-account-create-update-xhm89\" (UID: \"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\") " pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.244120 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-operator-scripts\") pod \"root-account-create-update-xhm89\" (UID: \"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\") " pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.244268 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh8b5\" (UniqueName: \"kubernetes.io/projected/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-kube-api-access-rh8b5\") pod \"root-account-create-update-xhm89\" (UID: 
\"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\") " pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.273086 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh8b5\" (UniqueName: \"kubernetes.io/projected/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-kube-api-access-rh8b5\") pod \"root-account-create-update-xhm89\" (UID: \"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\") " pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.430509 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-operator-scripts\") pod \"root-account-create-update-xhm89\" (UID: \"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\") " pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:18 crc kubenswrapper[4736]: I0214 11:00:18.645581 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.132114 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-msd5j" podUID="2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba" containerName="ovn-controller" probeResult="failure" output=< Feb 14 11:00:20 crc kubenswrapper[4736]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 14 11:00:20 crc kubenswrapper[4736]: > Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.257190 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6dm75" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.456178 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-msd5j-config-ptzwh"] Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.459889 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.464010 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.465844 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-msd5j-config-ptzwh"] Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.499537 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-log-ovn\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.499629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run-ovn\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.499651 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.499669 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-scripts\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") 
" pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.499717 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4rjw\" (UniqueName: \"kubernetes.io/projected/0e042322-31f9-4cd4-bfd7-558415528d17-kube-api-access-l4rjw\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.499787 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-additional-scripts\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.601209 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run-ovn\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.601478 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.601628 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-scripts\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: 
\"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.601799 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4rjw\" (UniqueName: \"kubernetes.io/projected/0e042322-31f9-4cd4-bfd7-558415528d17-kube-api-access-l4rjw\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.601938 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-additional-scripts\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.602055 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-log-ovn\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.602270 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-log-ovn\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.601550 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: 
\"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.601509 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run-ovn\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.603623 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-scripts\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.604166 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-additional-scripts\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.627275 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4rjw\" (UniqueName: \"kubernetes.io/projected/0e042322-31f9-4cd4-bfd7-558415528d17-kube-api-access-l4rjw\") pod \"ovn-controller-msd5j-config-ptzwh\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:20 crc kubenswrapper[4736]: I0214 11:00:20.805754 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:21 crc kubenswrapper[4736]: I0214 11:00:21.270271 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Feb 14 11:00:21 crc kubenswrapper[4736]: I0214 11:00:21.684641 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0bb03a69-d572-4b83-97b9-13d33b501b6a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 14 11:00:21 crc kubenswrapper[4736]: I0214 11:00:21.903551 4736 generic.go:334] "Generic (PLEG): container finished" podID="02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" containerID="6c0970877ab3dd1b247b58c7301c4449c9d313c1c999600a2c43886a5107e2e0" exitCode=0 Feb 14 11:00:21 crc kubenswrapper[4736]: I0214 11:00:21.903624 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ccs82" event={"ID":"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a","Type":"ContainerDied","Data":"6c0970877ab3dd1b247b58c7301c4449c9d313c1c999600a2c43886a5107e2e0"} Feb 14 11:00:25 crc kubenswrapper[4736]: I0214 11:00:25.128477 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-msd5j" podUID="2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba" containerName="ovn-controller" probeResult="failure" output=< Feb 14 11:00:25 crc kubenswrapper[4736]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 14 11:00:25 crc kubenswrapper[4736]: > Feb 14 11:00:26 crc kubenswrapper[4736]: I0214 11:00:26.916527 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ccs82" Feb 14 11:00:26 crc kubenswrapper[4736]: I0214 11:00:26.955192 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ccs82" event={"ID":"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a","Type":"ContainerDied","Data":"58f482628a536da2c7d3f8339da1bb839b3db29e08a7ee2faeaf11b43e80413a"} Feb 14 11:00:26 crc kubenswrapper[4736]: I0214 11:00:26.955231 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58f482628a536da2c7d3f8339da1bb839b3db29e08a7ee2faeaf11b43e80413a" Feb 14 11:00:26 crc kubenswrapper[4736]: I0214 11:00:26.955282 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ccs82" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.018798 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-scripts\") pod \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.018897 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-dispersionconf\") pod \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.018937 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-combined-ca-bundle\") pod \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.019004 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-s6tdm\" (UniqueName: \"kubernetes.io/projected/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-kube-api-access-s6tdm\") pod \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.019021 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-swiftconf\") pod \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.019089 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-ring-data-devices\") pod \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.019169 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-etc-swift\") pod \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\" (UID: \"02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a\") " Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.020826 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" (UID: "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.021883 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" (UID: "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.034045 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-kube-api-access-s6tdm" (OuterVolumeSpecName: "kube-api-access-s6tdm") pod "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" (UID: "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a"). InnerVolumeSpecName "kube-api-access-s6tdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.044867 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" (UID: "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.056050 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-scripts" (OuterVolumeSpecName: "scripts") pod "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" (UID: "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.059105 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" (UID: "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.059156 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" (UID: "02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.104462 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-msd5j-config-ptzwh"] Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.121067 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6tdm\" (UniqueName: \"kubernetes.io/projected/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-kube-api-access-s6tdm\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.121097 4736 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.121110 4736 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.121144 4736 reconciler_common.go:293] 
"Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.121154 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.121166 4736 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.121177 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:27 crc kubenswrapper[4736]: E0214 11:00:27.334039 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 14 11:00:27 crc kubenswrapper[4736]: E0214 11:00:27.334380 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xlxvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-kbq8d_openstack(7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 14 11:00:27 crc kubenswrapper[4736]: E0214 11:00:27.335700 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-kbq8d" podUID="7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.419819 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xhm89"] Feb 14 11:00:27 crc kubenswrapper[4736]: W0214 11:00:27.432418 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e18e214_64e7_49ee_bd4a_29b91d1ac8eb.slice/crio-1f8e38f4e32b02504e5e200c2f508fc390de81c0db6fc3d7601c682c16681c97 WatchSource:0}: Error finding container 1f8e38f4e32b02504e5e200c2f508fc390de81c0db6fc3d7601c682c16681c97: Status 404 returned error can't find the container with id 1f8e38f4e32b02504e5e200c2f508fc390de81c0db6fc3d7601c682c16681c97 Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.967807 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-msd5j-config-ptzwh" event={"ID":"0e042322-31f9-4cd4-bfd7-558415528d17","Type":"ContainerStarted","Data":"5b6dc2138345fa033d4f01c9bb5922f780761973c57adbefe19b2db3312dca5d"} Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.968397 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-msd5j-config-ptzwh" event={"ID":"0e042322-31f9-4cd4-bfd7-558415528d17","Type":"ContainerStarted","Data":"94f81097c6009b8b2a9cc34ebf4a2906248f09e7e55e4c35ed103f3d4bed45b5"} Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.971279 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhm89" 
event={"ID":"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb","Type":"ContainerStarted","Data":"854d4f26e50ae210147221370a867d170b3d197d869b79263c5ac8f658533ae8"} Feb 14 11:00:27 crc kubenswrapper[4736]: I0214 11:00:27.971350 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhm89" event={"ID":"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb","Type":"ContainerStarted","Data":"1f8e38f4e32b02504e5e200c2f508fc390de81c0db6fc3d7601c682c16681c97"} Feb 14 11:00:27 crc kubenswrapper[4736]: E0214 11:00:27.972969 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-kbq8d" podUID="7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" Feb 14 11:00:28 crc kubenswrapper[4736]: I0214 11:00:28.979617 4736 generic.go:334] "Generic (PLEG): container finished" podID="6e18e214-64e7-49ee-bd4a-29b91d1ac8eb" containerID="854d4f26e50ae210147221370a867d170b3d197d869b79263c5ac8f658533ae8" exitCode=0 Feb 14 11:00:28 crc kubenswrapper[4736]: I0214 11:00:28.979710 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhm89" event={"ID":"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb","Type":"ContainerDied","Data":"854d4f26e50ae210147221370a867d170b3d197d869b79263c5ac8f658533ae8"} Feb 14 11:00:28 crc kubenswrapper[4736]: I0214 11:00:28.981911 4736 generic.go:334] "Generic (PLEG): container finished" podID="0e042322-31f9-4cd4-bfd7-558415528d17" containerID="5b6dc2138345fa033d4f01c9bb5922f780761973c57adbefe19b2db3312dca5d" exitCode=0 Feb 14 11:00:28 crc kubenswrapper[4736]: I0214 11:00:28.981985 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-msd5j-config-ptzwh" 
event={"ID":"0e042322-31f9-4cd4-bfd7-558415528d17","Type":"ContainerDied","Data":"5b6dc2138345fa033d4f01c9bb5922f780761973c57adbefe19b2db3312dca5d"} Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.138250 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-msd5j" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.196691 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.210824 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0283d5c8-4795-458e-8faf-c4908c75e01e-etc-swift\") pod \"swift-storage-0\" (UID: \"0283d5c8-4795-458e-8faf-c4908c75e01e\") " pod="openstack/swift-storage-0" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.385038 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.389533 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.451052 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.501335 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-scripts\") pod \"0e042322-31f9-4cd4-bfd7-558415528d17\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.501396 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-log-ovn\") pod \"0e042322-31f9-4cd4-bfd7-558415528d17\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.501490 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-operator-scripts\") pod \"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\" (UID: \"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\") " Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.501525 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-additional-scripts\") pod \"0e042322-31f9-4cd4-bfd7-558415528d17\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.501543 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run-ovn\") pod \"0e042322-31f9-4cd4-bfd7-558415528d17\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.501569 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh8b5\" (UniqueName: 
\"kubernetes.io/projected/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-kube-api-access-rh8b5\") pod \"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\" (UID: \"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb\") " Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.501615 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run\") pod \"0e042322-31f9-4cd4-bfd7-558415528d17\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.501643 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4rjw\" (UniqueName: \"kubernetes.io/projected/0e042322-31f9-4cd4-bfd7-558415528d17-kube-api-access-l4rjw\") pod \"0e042322-31f9-4cd4-bfd7-558415528d17\" (UID: \"0e042322-31f9-4cd4-bfd7-558415528d17\") " Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.502176 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0e042322-31f9-4cd4-bfd7-558415528d17" (UID: "0e042322-31f9-4cd4-bfd7-558415528d17"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.502628 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0e042322-31f9-4cd4-bfd7-558415528d17" (UID: "0e042322-31f9-4cd4-bfd7-558415528d17"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.502657 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run" (OuterVolumeSpecName: "var-run") pod "0e042322-31f9-4cd4-bfd7-558415528d17" (UID: "0e042322-31f9-4cd4-bfd7-558415528d17"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.502667 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-scripts" (OuterVolumeSpecName: "scripts") pod "0e042322-31f9-4cd4-bfd7-558415528d17" (UID: "0e042322-31f9-4cd4-bfd7-558415528d17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.502906 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0e042322-31f9-4cd4-bfd7-558415528d17" (UID: "0e042322-31f9-4cd4-bfd7-558415528d17"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.503039 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e18e214-64e7-49ee-bd4a-29b91d1ac8eb" (UID: "6e18e214-64e7-49ee-bd4a-29b91d1ac8eb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.505916 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e042322-31f9-4cd4-bfd7-558415528d17-kube-api-access-l4rjw" (OuterVolumeSpecName: "kube-api-access-l4rjw") pod "0e042322-31f9-4cd4-bfd7-558415528d17" (UID: "0e042322-31f9-4cd4-bfd7-558415528d17"). InnerVolumeSpecName "kube-api-access-l4rjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.507967 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-kube-api-access-rh8b5" (OuterVolumeSpecName: "kube-api-access-rh8b5") pod "6e18e214-64e7-49ee-bd4a-29b91d1ac8eb" (UID: "6e18e214-64e7-49ee-bd4a-29b91d1ac8eb"). InnerVolumeSpecName "kube-api-access-rh8b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.603489 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.603834 4736 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.603846 4736 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.603856 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh8b5\" (UniqueName: 
\"kubernetes.io/projected/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb-kube-api-access-rh8b5\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.603869 4736 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-run\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.603918 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4rjw\" (UniqueName: \"kubernetes.io/projected/0e042322-31f9-4cd4-bfd7-558415528d17-kube-api-access-l4rjw\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.603930 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e042322-31f9-4cd4-bfd7-558415528d17-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:30 crc kubenswrapper[4736]: I0214 11:00:30.603941 4736 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e042322-31f9-4cd4-bfd7-558415528d17-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.003060 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-msd5j-config-ptzwh" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.003541 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-msd5j-config-ptzwh" event={"ID":"0e042322-31f9-4cd4-bfd7-558415528d17","Type":"ContainerDied","Data":"94f81097c6009b8b2a9cc34ebf4a2906248f09e7e55e4c35ed103f3d4bed45b5"} Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.003581 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f81097c6009b8b2a9cc34ebf4a2906248f09e7e55e4c35ed103f3d4bed45b5" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.007062 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.008440 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhm89" event={"ID":"6e18e214-64e7-49ee-bd4a-29b91d1ac8eb","Type":"ContainerDied","Data":"1f8e38f4e32b02504e5e200c2f508fc390de81c0db6fc3d7601c682c16681c97"} Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.008483 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xhm89" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.008488 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f8e38f4e32b02504e5e200c2f508fc390de81c0db6fc3d7601c682c16681c97" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.269839 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.544120 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-msd5j-config-ptzwh"] Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.555783 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-msd5j-config-ptzwh"] Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.697057 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.729555 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fq9gh"] Feb 14 11:00:31 crc kubenswrapper[4736]: E0214 11:00:31.755018 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e042322-31f9-4cd4-bfd7-558415528d17" containerName="ovn-config" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.755054 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e042322-31f9-4cd4-bfd7-558415528d17" containerName="ovn-config" Feb 14 11:00:31 crc kubenswrapper[4736]: E0214 11:00:31.755085 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" containerName="swift-ring-rebalance" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.755092 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" containerName="swift-ring-rebalance" Feb 14 11:00:31 crc kubenswrapper[4736]: E0214 11:00:31.757757 4736 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e18e214-64e7-49ee-bd4a-29b91d1ac8eb" containerName="mariadb-account-create-update" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.757791 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e18e214-64e7-49ee-bd4a-29b91d1ac8eb" containerName="mariadb-account-create-update" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.758094 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a" containerName="swift-ring-rebalance" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.758116 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e042322-31f9-4cd4-bfd7-558415528d17" containerName="ovn-config" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.758123 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e18e214-64e7-49ee-bd4a-29b91d1ac8eb" containerName="mariadb-account-create-update" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.758568 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fq9gh"] Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.758640 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.826373 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-msd5j-config-d7lnb"] Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.827712 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.829927 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.830954 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e563c-8b5a-4405-a623-805bd1da0ef3-operator-scripts\") pod \"cinder-db-create-fq9gh\" (UID: \"6d4e563c-8b5a-4405-a623-805bd1da0ef3\") " pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.831057 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j28pt\" (UniqueName: \"kubernetes.io/projected/6d4e563c-8b5a-4405-a623-805bd1da0ef3-kube-api-access-j28pt\") pod \"cinder-db-create-fq9gh\" (UID: \"6d4e563c-8b5a-4405-a623-805bd1da0ef3\") " pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.887504 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-msd5j-config-d7lnb"] Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.932148 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-log-ovn\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.932209 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ccv\" (UniqueName: \"kubernetes.io/projected/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-kube-api-access-47ccv\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: 
\"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.932240 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j28pt\" (UniqueName: \"kubernetes.io/projected/6d4e563c-8b5a-4405-a623-805bd1da0ef3-kube-api-access-j28pt\") pod \"cinder-db-create-fq9gh\" (UID: \"6d4e563c-8b5a-4405-a623-805bd1da0ef3\") " pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.932328 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-additional-scripts\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.932346 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e563c-8b5a-4405-a623-805bd1da0ef3-operator-scripts\") pod \"cinder-db-create-fq9gh\" (UID: \"6d4e563c-8b5a-4405-a623-805bd1da0ef3\") " pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.932367 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.933016 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e563c-8b5a-4405-a623-805bd1da0ef3-operator-scripts\") pod \"cinder-db-create-fq9gh\" (UID: 
\"6d4e563c-8b5a-4405-a623-805bd1da0ef3\") " pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.932394 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run-ovn\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.933074 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-scripts\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.942904 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8f1b-account-create-update-8bd5x"] Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.944011 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.946339 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.955184 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8f1b-account-create-update-8bd5x"] Feb 14 11:00:31 crc kubenswrapper[4736]: I0214 11:00:31.964624 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j28pt\" (UniqueName: \"kubernetes.io/projected/6d4e563c-8b5a-4405-a623-805bd1da0ef3-kube-api-access-j28pt\") pod \"cinder-db-create-fq9gh\" (UID: \"6d4e563c-8b5a-4405-a623-805bd1da0ef3\") " pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.021776 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"a9e7303df842e5726f4183054c9110ba8a9f780a99df95eb8ccf5d48c049933d"} Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.029680 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vlfkq"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.031832 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.034510 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-log-ovn\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.034703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47ccv\" (UniqueName: \"kubernetes.io/projected/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-kube-api-access-47ccv\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.035119 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-log-ovn\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.035277 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vbq2\" (UniqueName: \"kubernetes.io/projected/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-kube-api-access-4vbq2\") pod \"cinder-8f1b-account-create-update-8bd5x\" (UID: \"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\") " pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.035412 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-additional-scripts\") pod 
\"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.035518 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.035595 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-operator-scripts\") pod \"cinder-8f1b-account-create-update-8bd5x\" (UID: \"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\") " pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.035686 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run-ovn\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.035773 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-scripts\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.036144 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-additional-scripts\") pod 
\"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.036234 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.036283 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run-ovn\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.038305 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-scripts\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.059582 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vlfkq"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.066156 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2108-account-create-update-gqshp"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.068802 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.076333 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.095980 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.102331 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47ccv\" (UniqueName: \"kubernetes.io/projected/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-kube-api-access-47ccv\") pod \"ovn-controller-msd5j-config-d7lnb\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.139990 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qz7t\" (UniqueName: \"kubernetes.io/projected/ac413b61-b5c8-44d6-9968-b2a2e166ae25-kube-api-access-2qz7t\") pod \"barbican-db-create-vlfkq\" (UID: \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\") " pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.140049 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vbq2\" (UniqueName: \"kubernetes.io/projected/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-kube-api-access-4vbq2\") pod \"cinder-8f1b-account-create-update-8bd5x\" (UID: \"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\") " pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.140096 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-operator-scripts\") pod \"cinder-8f1b-account-create-update-8bd5x\" (UID: 
\"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\") " pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.140146 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3515d4a8-4470-4062-99c4-54388510f693-operator-scripts\") pod \"barbican-2108-account-create-update-gqshp\" (UID: \"3515d4a8-4470-4062-99c4-54388510f693\") " pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.140201 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac413b61-b5c8-44d6-9968-b2a2e166ae25-operator-scripts\") pod \"barbican-db-create-vlfkq\" (UID: \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\") " pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.140244 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctprr\" (UniqueName: \"kubernetes.io/projected/3515d4a8-4470-4062-99c4-54388510f693-kube-api-access-ctprr\") pod \"barbican-2108-account-create-update-gqshp\" (UID: \"3515d4a8-4470-4062-99c4-54388510f693\") " pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.141151 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-operator-scripts\") pod \"cinder-8f1b-account-create-update-8bd5x\" (UID: \"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\") " pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.147486 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2108-account-create-update-gqshp"] Feb 14 11:00:32 crc 
kubenswrapper[4736]: I0214 11:00:32.147857 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.158658 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vbq2\" (UniqueName: \"kubernetes.io/projected/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-kube-api-access-4vbq2\") pod \"cinder-8f1b-account-create-update-8bd5x\" (UID: \"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\") " pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.242046 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3515d4a8-4470-4062-99c4-54388510f693-operator-scripts\") pod \"barbican-2108-account-create-update-gqshp\" (UID: \"3515d4a8-4470-4062-99c4-54388510f693\") " pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.242120 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac413b61-b5c8-44d6-9968-b2a2e166ae25-operator-scripts\") pod \"barbican-db-create-vlfkq\" (UID: \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\") " pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.242164 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctprr\" (UniqueName: \"kubernetes.io/projected/3515d4a8-4470-4062-99c4-54388510f693-kube-api-access-ctprr\") pod \"barbican-2108-account-create-update-gqshp\" (UID: \"3515d4a8-4470-4062-99c4-54388510f693\") " pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.242249 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qz7t\" 
(UniqueName: \"kubernetes.io/projected/ac413b61-b5c8-44d6-9968-b2a2e166ae25-kube-api-access-2qz7t\") pod \"barbican-db-create-vlfkq\" (UID: \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\") " pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.243017 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac413b61-b5c8-44d6-9968-b2a2e166ae25-operator-scripts\") pod \"barbican-db-create-vlfkq\" (UID: \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\") " pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.250566 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3515d4a8-4470-4062-99c4-54388510f693-operator-scripts\") pod \"barbican-2108-account-create-update-gqshp\" (UID: \"3515d4a8-4470-4062-99c4-54388510f693\") " pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.263618 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctprr\" (UniqueName: \"kubernetes.io/projected/3515d4a8-4470-4062-99c4-54388510f693-kube-api-access-ctprr\") pod \"barbican-2108-account-create-update-gqshp\" (UID: \"3515d4a8-4470-4062-99c4-54388510f693\") " pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.266844 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qz7t\" (UniqueName: \"kubernetes.io/projected/ac413b61-b5c8-44d6-9968-b2a2e166ae25-kube-api-access-2qz7t\") pod \"barbican-db-create-vlfkq\" (UID: \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\") " pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.270528 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.328238 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wm596"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.329342 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.337269 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.337293 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t8r6k" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.337475 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.338979 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.352796 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.355690 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wm596"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.388260 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.441110 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e042322-31f9-4cd4-bfd7-558415528d17" path="/var/lib/kubelet/pods/0e042322-31f9-4cd4-bfd7-558415528d17/volumes" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.448412 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-combined-ca-bundle\") pod \"keystone-db-sync-wm596\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.448509 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgh5\" (UniqueName: \"kubernetes.io/projected/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-kube-api-access-xqgh5\") pod \"keystone-db-sync-wm596\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.448560 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-config-data\") pod \"keystone-db-sync-wm596\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.550300 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-combined-ca-bundle\") pod \"keystone-db-sync-wm596\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.550374 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqgh5\" (UniqueName: \"kubernetes.io/projected/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-kube-api-access-xqgh5\") pod \"keystone-db-sync-wm596\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.550424 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-config-data\") pod \"keystone-db-sync-wm596\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.560211 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-config-data\") pod \"keystone-db-sync-wm596\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.569205 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-combined-ca-bundle\") pod \"keystone-db-sync-wm596\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.575722 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqgh5\" (UniqueName: \"kubernetes.io/projected/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-kube-api-access-xqgh5\") pod \"keystone-db-sync-wm596\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.688317 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-msd5j-config-d7lnb"] Feb 14 11:00:32 crc 
kubenswrapper[4736]: I0214 11:00:32.697181 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:32 crc kubenswrapper[4736]: W0214 11:00:32.697902 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb427a09e_39a8_448e_a1c9_9fe6cf3c6c4b.slice/crio-589da2389ddd7bd09f2ea1cba5fa7e25ccddfd6dcc86a7059ad7ba21c26edc95 WatchSource:0}: Error finding container 589da2389ddd7bd09f2ea1cba5fa7e25ccddfd6dcc86a7059ad7ba21c26edc95: Status 404 returned error can't find the container with id 589da2389ddd7bd09f2ea1cba5fa7e25ccddfd6dcc86a7059ad7ba21c26edc95 Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.750427 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d904-account-create-update-7p9zr"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.751447 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.756714 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.764808 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-zmqp8"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.765819 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.799219 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d904-account-create-update-7p9zr"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.822349 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zmqp8"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.846176 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8f1b-account-create-update-8bd5x"] Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.868982 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/183769d0-3dde-43ba-995a-16aa55c72ff8-operator-scripts\") pod \"neutron-d904-account-create-update-7p9zr\" (UID: \"183769d0-3dde-43ba-995a-16aa55c72ff8\") " pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.869050 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlrrg\" (UniqueName: \"kubernetes.io/projected/af05acd6-857a-4997-a369-54921d3db536-kube-api-access-nlrrg\") pod \"neutron-db-create-zmqp8\" (UID: \"af05acd6-857a-4997-a369-54921d3db536\") " pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.869085 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7f8f\" (UniqueName: \"kubernetes.io/projected/183769d0-3dde-43ba-995a-16aa55c72ff8-kube-api-access-g7f8f\") pod \"neutron-d904-account-create-update-7p9zr\" (UID: \"183769d0-3dde-43ba-995a-16aa55c72ff8\") " pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.869149 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af05acd6-857a-4997-a369-54921d3db536-operator-scripts\") pod \"neutron-db-create-zmqp8\" (UID: \"af05acd6-857a-4997-a369-54921d3db536\") " pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.973386 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/183769d0-3dde-43ba-995a-16aa55c72ff8-operator-scripts\") pod \"neutron-d904-account-create-update-7p9zr\" (UID: \"183769d0-3dde-43ba-995a-16aa55c72ff8\") " pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.973669 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlrrg\" (UniqueName: \"kubernetes.io/projected/af05acd6-857a-4997-a369-54921d3db536-kube-api-access-nlrrg\") pod \"neutron-db-create-zmqp8\" (UID: \"af05acd6-857a-4997-a369-54921d3db536\") " pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.973724 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7f8f\" (UniqueName: \"kubernetes.io/projected/183769d0-3dde-43ba-995a-16aa55c72ff8-kube-api-access-g7f8f\") pod \"neutron-d904-account-create-update-7p9zr\" (UID: \"183769d0-3dde-43ba-995a-16aa55c72ff8\") " pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.973873 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af05acd6-857a-4997-a369-54921d3db536-operator-scripts\") pod \"neutron-db-create-zmqp8\" (UID: \"af05acd6-857a-4997-a369-54921d3db536\") " pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.974506 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/183769d0-3dde-43ba-995a-16aa55c72ff8-operator-scripts\") pod \"neutron-d904-account-create-update-7p9zr\" (UID: \"183769d0-3dde-43ba-995a-16aa55c72ff8\") " pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:32 crc kubenswrapper[4736]: I0214 11:00:32.974734 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af05acd6-857a-4997-a369-54921d3db536-operator-scripts\") pod \"neutron-db-create-zmqp8\" (UID: \"af05acd6-857a-4997-a369-54921d3db536\") " pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.004652 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7f8f\" (UniqueName: \"kubernetes.io/projected/183769d0-3dde-43ba-995a-16aa55c72ff8-kube-api-access-g7f8f\") pod \"neutron-d904-account-create-update-7p9zr\" (UID: \"183769d0-3dde-43ba-995a-16aa55c72ff8\") " pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.011005 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlrrg\" (UniqueName: \"kubernetes.io/projected/af05acd6-857a-4997-a369-54921d3db536-kube-api-access-nlrrg\") pod \"neutron-db-create-zmqp8\" (UID: \"af05acd6-857a-4997-a369-54921d3db536\") " pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.102185 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.107130 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.111853 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fq9gh"] Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.117422 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8f1b-account-create-update-8bd5x" event={"ID":"7fe4563b-fc89-4e16-9fb7-f832fc1cf699","Type":"ContainerStarted","Data":"bf22edb45d3f4dff6997e349b2503d46fb44beb540012ac4219ad4d8df2ebccd"} Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.117473 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8f1b-account-create-update-8bd5x" event={"ID":"7fe4563b-fc89-4e16-9fb7-f832fc1cf699","Type":"ContainerStarted","Data":"d2c66f60c5342e7f330d76c2b381424275d50658bd456d989feb8bc643606b70"} Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.125462 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vlfkq"] Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.138891 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-msd5j-config-d7lnb" event={"ID":"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b","Type":"ContainerStarted","Data":"589da2389ddd7bd09f2ea1cba5fa7e25ccddfd6dcc86a7059ad7ba21c26edc95"} Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.270157 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-8f1b-account-create-update-8bd5x" podStartSLOduration=2.270141149 podStartE2EDuration="2.270141149s" podCreationTimestamp="2026-02-14 11:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:33.137690428 +0000 UTC m=+1143.506317796" watchObservedRunningTime="2026-02-14 11:00:33.270141149 +0000 UTC m=+1143.638768517" Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.278054 
4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2108-account-create-update-gqshp"] Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.383172 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wm596"] Feb 14 11:00:33 crc kubenswrapper[4736]: W0214 11:00:33.471140 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d4e563c_8b5a_4405_a623_805bd1da0ef3.slice/crio-4a13faa5dbf26c2a5d7349f34db5273e43e75c1fb634c28fb3a326f2cceca2de WatchSource:0}: Error finding container 4a13faa5dbf26c2a5d7349f34db5273e43e75c1fb634c28fb3a326f2cceca2de: Status 404 returned error can't find the container with id 4a13faa5dbf26c2a5d7349f34db5273e43e75c1fb634c28fb3a326f2cceca2de Feb 14 11:00:33 crc kubenswrapper[4736]: W0214 11:00:33.475137 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac413b61_b5c8_44d6_9968_b2a2e166ae25.slice/crio-29b4c12ea5f652b8037828096df1cdecc6b88a8328c699349172dfe1fab56db4 WatchSource:0}: Error finding container 29b4c12ea5f652b8037828096df1cdecc6b88a8328c699349172dfe1fab56db4: Status 404 returned error can't find the container with id 29b4c12ea5f652b8037828096df1cdecc6b88a8328c699349172dfe1fab56db4 Feb 14 11:00:33 crc kubenswrapper[4736]: W0214 11:00:33.476379 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3515d4a8_4470_4062_99c4_54388510f693.slice/crio-213d07d655530515d469536e62a090080607545a55798a47ffe95411af27a353 WatchSource:0}: Error finding container 213d07d655530515d469536e62a090080607545a55798a47ffe95411af27a353: Status 404 returned error can't find the container with id 213d07d655530515d469536e62a090080607545a55798a47ffe95411af27a353 Feb 14 11:00:33 crc kubenswrapper[4736]: I0214 11:00:33.895429 4736 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/neutron-db-create-zmqp8"] Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.041634 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d904-account-create-update-7p9zr"] Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.158538 4736 generic.go:334] "Generic (PLEG): container finished" podID="7fe4563b-fc89-4e16-9fb7-f832fc1cf699" containerID="bf22edb45d3f4dff6997e349b2503d46fb44beb540012ac4219ad4d8df2ebccd" exitCode=0 Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.158637 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8f1b-account-create-update-8bd5x" event={"ID":"7fe4563b-fc89-4e16-9fb7-f832fc1cf699","Type":"ContainerDied","Data":"bf22edb45d3f4dff6997e349b2503d46fb44beb540012ac4219ad4d8df2ebccd"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.162472 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wm596" event={"ID":"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d","Type":"ContainerStarted","Data":"93120f70b9f3af2b768530372d5b4ce389eab9ad235d55feded7cca48b9a8c4a"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.174033 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d904-account-create-update-7p9zr" event={"ID":"183769d0-3dde-43ba-995a-16aa55c72ff8","Type":"ContainerStarted","Data":"ae0a981bb92035758cdfbcb6128418bd364ddaca4a3fa9c600b732d86f5f8384"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.188580 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"185037ef367b0f74f5e12fb59b247797d644feef1f5f040f9f6270d2b7ea1f5c"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.192088 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2108-account-create-update-gqshp" 
event={"ID":"3515d4a8-4470-4062-99c4-54388510f693","Type":"ContainerStarted","Data":"6e26534a72b30431578facb514b6fa11fbc9fff7e86ae4a6671c56e10a76e1e1"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.192124 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2108-account-create-update-gqshp" event={"ID":"3515d4a8-4470-4062-99c4-54388510f693","Type":"ContainerStarted","Data":"213d07d655530515d469536e62a090080607545a55798a47ffe95411af27a353"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.199481 4736 generic.go:334] "Generic (PLEG): container finished" podID="b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" containerID="7d39be7bf2b580adcdd781b4e6826c3f49bf118e9ed055c5f362a24b96855639" exitCode=0 Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.199564 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-msd5j-config-d7lnb" event={"ID":"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b","Type":"ContainerDied","Data":"7d39be7bf2b580adcdd781b4e6826c3f49bf118e9ed055c5f362a24b96855639"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.216110 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fq9gh" event={"ID":"6d4e563c-8b5a-4405-a623-805bd1da0ef3","Type":"ContainerStarted","Data":"d7005beb39ace8e2da9d9690c4fa2a56b09fdabeee964c54ab9a7b4481ab0e0c"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.216154 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fq9gh" event={"ID":"6d4e563c-8b5a-4405-a623-805bd1da0ef3","Type":"ContainerStarted","Data":"4a13faa5dbf26c2a5d7349f34db5273e43e75c1fb634c28fb3a326f2cceca2de"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.218590 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-2108-account-create-update-gqshp" podStartSLOduration=2.218570785 podStartE2EDuration="2.218570785s" podCreationTimestamp="2026-02-14 11:00:32 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:34.216680361 +0000 UTC m=+1144.585307739" watchObservedRunningTime="2026-02-14 11:00:34.218570785 +0000 UTC m=+1144.587198153" Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.231482 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vlfkq" event={"ID":"ac413b61-b5c8-44d6-9968-b2a2e166ae25","Type":"ContainerStarted","Data":"41744a3fa865f70465ea016a3af02102a06751f863407c3cb340f9ddd2d757af"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.231526 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vlfkq" event={"ID":"ac413b61-b5c8-44d6-9968-b2a2e166ae25","Type":"ContainerStarted","Data":"29b4c12ea5f652b8037828096df1cdecc6b88a8328c699349172dfe1fab56db4"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.233428 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zmqp8" event={"ID":"af05acd6-857a-4997-a369-54921d3db536","Type":"ContainerStarted","Data":"228a6235fc2953bef50d4ad1258655baa52ad3c04ee47f87742f41bd15c5ef5f"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.233451 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zmqp8" event={"ID":"af05acd6-857a-4997-a369-54921d3db536","Type":"ContainerStarted","Data":"16f46c4a60c1b104a5e87b679c81960f7f0cb2da0633355d1bd9e8b3997db3ff"} Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.251400 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-fq9gh" podStartSLOduration=3.251380229 podStartE2EDuration="3.251380229s" podCreationTimestamp="2026-02-14 11:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:34.242248606 +0000 UTC m=+1144.610875974" 
watchObservedRunningTime="2026-02-14 11:00:34.251380229 +0000 UTC m=+1144.620007597" Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.343394 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-vlfkq" podStartSLOduration=3.343377906 podStartE2EDuration="3.343377906s" podCreationTimestamp="2026-02-14 11:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:34.321382683 +0000 UTC m=+1144.690010051" watchObservedRunningTime="2026-02-14 11:00:34.343377906 +0000 UTC m=+1144.712005274" Feb 14 11:00:34 crc kubenswrapper[4736]: I0214 11:00:34.353517 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-zmqp8" podStartSLOduration=2.353501467 podStartE2EDuration="2.353501467s" podCreationTimestamp="2026-02-14 11:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:34.339851464 +0000 UTC m=+1144.708478832" watchObservedRunningTime="2026-02-14 11:00:34.353501467 +0000 UTC m=+1144.722128835" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.245574 4736 generic.go:334] "Generic (PLEG): container finished" podID="6d4e563c-8b5a-4405-a623-805bd1da0ef3" containerID="d7005beb39ace8e2da9d9690c4fa2a56b09fdabeee964c54ab9a7b4481ab0e0c" exitCode=0 Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.245619 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fq9gh" event={"ID":"6d4e563c-8b5a-4405-a623-805bd1da0ef3","Type":"ContainerDied","Data":"d7005beb39ace8e2da9d9690c4fa2a56b09fdabeee964c54ab9a7b4481ab0e0c"} Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.252116 4736 generic.go:334] "Generic (PLEG): container finished" podID="ac413b61-b5c8-44d6-9968-b2a2e166ae25" 
containerID="41744a3fa865f70465ea016a3af02102a06751f863407c3cb340f9ddd2d757af" exitCode=0 Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.252195 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vlfkq" event={"ID":"ac413b61-b5c8-44d6-9968-b2a2e166ae25","Type":"ContainerDied","Data":"41744a3fa865f70465ea016a3af02102a06751f863407c3cb340f9ddd2d757af"} Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.253529 4736 generic.go:334] "Generic (PLEG): container finished" podID="af05acd6-857a-4997-a369-54921d3db536" containerID="228a6235fc2953bef50d4ad1258655baa52ad3c04ee47f87742f41bd15c5ef5f" exitCode=0 Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.253588 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zmqp8" event={"ID":"af05acd6-857a-4997-a369-54921d3db536","Type":"ContainerDied","Data":"228a6235fc2953bef50d4ad1258655baa52ad3c04ee47f87742f41bd15c5ef5f"} Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.254918 4736 generic.go:334] "Generic (PLEG): container finished" podID="3515d4a8-4470-4062-99c4-54388510f693" containerID="6e26534a72b30431578facb514b6fa11fbc9fff7e86ae4a6671c56e10a76e1e1" exitCode=0 Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.255074 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2108-account-create-update-gqshp" event={"ID":"3515d4a8-4470-4062-99c4-54388510f693","Type":"ContainerDied","Data":"6e26534a72b30431578facb514b6fa11fbc9fff7e86ae4a6671c56e10a76e1e1"} Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.683862 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.685227 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.840522 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-log-ovn\") pod \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.840634 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-scripts\") pod \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.840756 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47ccv\" (UniqueName: \"kubernetes.io/projected/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-kube-api-access-47ccv\") pod \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.840778 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run\") pod \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.840820 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" (UID: "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.840892 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-operator-scripts\") pod \"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\" (UID: \"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\") " Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.840925 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vbq2\" (UniqueName: \"kubernetes.io/projected/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-kube-api-access-4vbq2\") pod \"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\" (UID: \"7fe4563b-fc89-4e16-9fb7-f832fc1cf699\") " Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.840941 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run-ovn\") pod \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.840976 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-additional-scripts\") pod \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\" (UID: \"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b\") " Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.841392 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" (UID: "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.841363 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run" (OuterVolumeSpecName: "var-run") pod "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" (UID: "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.841631 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fe4563b-fc89-4e16-9fb7-f832fc1cf699" (UID: "7fe4563b-fc89-4e16-9fb7-f832fc1cf699"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.842228 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" (UID: "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.842311 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-scripts" (OuterVolumeSpecName: "scripts") pod "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" (UID: "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.842611 4736 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.842647 4736 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.842659 4736 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.842669 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.842699 4736 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-var-run\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.842708 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.847835 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-kube-api-access-4vbq2" (OuterVolumeSpecName: "kube-api-access-4vbq2") pod "7fe4563b-fc89-4e16-9fb7-f832fc1cf699" (UID: "7fe4563b-fc89-4e16-9fb7-f832fc1cf699"). 
InnerVolumeSpecName "kube-api-access-4vbq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.848033 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-kube-api-access-47ccv" (OuterVolumeSpecName: "kube-api-access-47ccv") pod "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" (UID: "b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b"). InnerVolumeSpecName "kube-api-access-47ccv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.943955 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47ccv\" (UniqueName: \"kubernetes.io/projected/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b-kube-api-access-47ccv\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:35 crc kubenswrapper[4736]: I0214 11:00:35.943992 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vbq2\" (UniqueName: \"kubernetes.io/projected/7fe4563b-fc89-4e16-9fb7-f832fc1cf699-kube-api-access-4vbq2\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.264159 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"6c349aa5d72acc218d8b4e6c1dad83f02a2767aecdfc65aba7ae08e8eefd5219"} Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.264482 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"dd03842baef63e20549b1447858cf0fe86af47286719739ab387a225e2295eb0"} Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.265449 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-msd5j-config-d7lnb" 
event={"ID":"b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b","Type":"ContainerDied","Data":"589da2389ddd7bd09f2ea1cba5fa7e25ccddfd6dcc86a7059ad7ba21c26edc95"} Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.265483 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="589da2389ddd7bd09f2ea1cba5fa7e25ccddfd6dcc86a7059ad7ba21c26edc95" Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.265494 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-msd5j-config-d7lnb" Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.269786 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8f1b-account-create-update-8bd5x" event={"ID":"7fe4563b-fc89-4e16-9fb7-f832fc1cf699","Type":"ContainerDied","Data":"d2c66f60c5342e7f330d76c2b381424275d50658bd456d989feb8bc643606b70"} Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.269815 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2c66f60c5342e7f330d76c2b381424275d50658bd456d989feb8bc643606b70" Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.269880 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8f1b-account-create-update-8bd5x" Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.275731 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d904-account-create-update-7p9zr" event={"ID":"183769d0-3dde-43ba-995a-16aa55c72ff8","Type":"ContainerStarted","Data":"1fed47927ab41ff09643f034e4c000b2d05e2486c1e4c6d694c29187755e54c9"} Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.309110 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d904-account-create-update-7p9zr" podStartSLOduration=4.309090609 podStartE2EDuration="4.309090609s" podCreationTimestamp="2026-02-14 11:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:36.297698291 +0000 UTC m=+1146.666325659" watchObservedRunningTime="2026-02-14 11:00:36.309090609 +0000 UTC m=+1146.677717977" Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.761251 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-msd5j-config-d7lnb"] Feb 14 11:00:36 crc kubenswrapper[4736]: I0214 11:00:36.767495 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-msd5j-config-d7lnb"] Feb 14 11:00:37 crc kubenswrapper[4736]: I0214 11:00:37.291558 4736 generic.go:334] "Generic (PLEG): container finished" podID="183769d0-3dde-43ba-995a-16aa55c72ff8" containerID="1fed47927ab41ff09643f034e4c000b2d05e2486c1e4c6d694c29187755e54c9" exitCode=0 Feb 14 11:00:37 crc kubenswrapper[4736]: I0214 11:00:37.291708 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d904-account-create-update-7p9zr" event={"ID":"183769d0-3dde-43ba-995a-16aa55c72ff8","Type":"ContainerDied","Data":"1fed47927ab41ff09643f034e4c000b2d05e2486c1e4c6d694c29187755e54c9"} Feb 14 11:00:37 crc kubenswrapper[4736]: I0214 11:00:37.296418 4736 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"694ceb87f292685da9165f28c4cc54a0755a16085ed0b89340ce79da7f36a141"} Feb 14 11:00:38 crc kubenswrapper[4736]: I0214 11:00:38.417049 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" path="/var/lib/kubelet/pods/b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b/volumes" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.090653 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.150350 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.150467 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.155524 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.186357 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.200028 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qz7t\" (UniqueName: \"kubernetes.io/projected/ac413b61-b5c8-44d6-9968-b2a2e166ae25-kube-api-access-2qz7t\") pod \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\" (UID: \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.200102 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac413b61-b5c8-44d6-9968-b2a2e166ae25-operator-scripts\") pod \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\" (UID: \"ac413b61-b5c8-44d6-9968-b2a2e166ae25\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.200977 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac413b61-b5c8-44d6-9968-b2a2e166ae25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac413b61-b5c8-44d6-9968-b2a2e166ae25" (UID: "ac413b61-b5c8-44d6-9968-b2a2e166ae25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.206918 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac413b61-b5c8-44d6-9968-b2a2e166ae25-kube-api-access-2qz7t" (OuterVolumeSpecName: "kube-api-access-2qz7t") pod "ac413b61-b5c8-44d6-9968-b2a2e166ae25" (UID: "ac413b61-b5c8-44d6-9968-b2a2e166ae25"). InnerVolumeSpecName "kube-api-access-2qz7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.301900 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af05acd6-857a-4997-a369-54921d3db536-operator-scripts\") pod \"af05acd6-857a-4997-a369-54921d3db536\" (UID: \"af05acd6-857a-4997-a369-54921d3db536\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.302016 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3515d4a8-4470-4062-99c4-54388510f693-operator-scripts\") pod \"3515d4a8-4470-4062-99c4-54388510f693\" (UID: \"3515d4a8-4470-4062-99c4-54388510f693\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.302096 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j28pt\" (UniqueName: \"kubernetes.io/projected/6d4e563c-8b5a-4405-a623-805bd1da0ef3-kube-api-access-j28pt\") pod \"6d4e563c-8b5a-4405-a623-805bd1da0ef3\" (UID: \"6d4e563c-8b5a-4405-a623-805bd1da0ef3\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.302626 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af05acd6-857a-4997-a369-54921d3db536-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af05acd6-857a-4997-a369-54921d3db536" (UID: "af05acd6-857a-4997-a369-54921d3db536"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.302828 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3515d4a8-4470-4062-99c4-54388510f693-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3515d4a8-4470-4062-99c4-54388510f693" (UID: "3515d4a8-4470-4062-99c4-54388510f693"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.302893 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7f8f\" (UniqueName: \"kubernetes.io/projected/183769d0-3dde-43ba-995a-16aa55c72ff8-kube-api-access-g7f8f\") pod \"183769d0-3dde-43ba-995a-16aa55c72ff8\" (UID: \"183769d0-3dde-43ba-995a-16aa55c72ff8\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.302952 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/183769d0-3dde-43ba-995a-16aa55c72ff8-operator-scripts\") pod \"183769d0-3dde-43ba-995a-16aa55c72ff8\" (UID: \"183769d0-3dde-43ba-995a-16aa55c72ff8\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.302978 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlrrg\" (UniqueName: \"kubernetes.io/projected/af05acd6-857a-4997-a369-54921d3db536-kube-api-access-nlrrg\") pod \"af05acd6-857a-4997-a369-54921d3db536\" (UID: \"af05acd6-857a-4997-a369-54921d3db536\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.303046 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctprr\" (UniqueName: \"kubernetes.io/projected/3515d4a8-4470-4062-99c4-54388510f693-kube-api-access-ctprr\") pod \"3515d4a8-4470-4062-99c4-54388510f693\" (UID: \"3515d4a8-4470-4062-99c4-54388510f693\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.303082 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e563c-8b5a-4405-a623-805bd1da0ef3-operator-scripts\") pod \"6d4e563c-8b5a-4405-a623-805bd1da0ef3\" (UID: \"6d4e563c-8b5a-4405-a623-805bd1da0ef3\") " Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.303620 4736 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2qz7t\" (UniqueName: \"kubernetes.io/projected/ac413b61-b5c8-44d6-9968-b2a2e166ae25-kube-api-access-2qz7t\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.303636 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac413b61-b5c8-44d6-9968-b2a2e166ae25-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.303645 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af05acd6-857a-4997-a369-54921d3db536-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.303654 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3515d4a8-4470-4062-99c4-54388510f693-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.303671 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/183769d0-3dde-43ba-995a-16aa55c72ff8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "183769d0-3dde-43ba-995a-16aa55c72ff8" (UID: "183769d0-3dde-43ba-995a-16aa55c72ff8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.303853 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4e563c-8b5a-4405-a623-805bd1da0ef3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d4e563c-8b5a-4405-a623-805bd1da0ef3" (UID: "6d4e563c-8b5a-4405-a623-805bd1da0ef3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.305291 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4e563c-8b5a-4405-a623-805bd1da0ef3-kube-api-access-j28pt" (OuterVolumeSpecName: "kube-api-access-j28pt") pod "6d4e563c-8b5a-4405-a623-805bd1da0ef3" (UID: "6d4e563c-8b5a-4405-a623-805bd1da0ef3"). InnerVolumeSpecName "kube-api-access-j28pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.305605 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3515d4a8-4470-4062-99c4-54388510f693-kube-api-access-ctprr" (OuterVolumeSpecName: "kube-api-access-ctprr") pod "3515d4a8-4470-4062-99c4-54388510f693" (UID: "3515d4a8-4470-4062-99c4-54388510f693"). InnerVolumeSpecName "kube-api-access-ctprr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.307301 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af05acd6-857a-4997-a369-54921d3db536-kube-api-access-nlrrg" (OuterVolumeSpecName: "kube-api-access-nlrrg") pod "af05acd6-857a-4997-a369-54921d3db536" (UID: "af05acd6-857a-4997-a369-54921d3db536"). InnerVolumeSpecName "kube-api-access-nlrrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.307774 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183769d0-3dde-43ba-995a-16aa55c72ff8-kube-api-access-g7f8f" (OuterVolumeSpecName: "kube-api-access-g7f8f") pod "183769d0-3dde-43ba-995a-16aa55c72ff8" (UID: "183769d0-3dde-43ba-995a-16aa55c72ff8"). InnerVolumeSpecName "kube-api-access-g7f8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.311647 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wm596" event={"ID":"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d","Type":"ContainerStarted","Data":"5400047ea1aa4428f622735a7a13601b46e835600b5b9c51877d2f2d49b7d805"} Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.313699 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d904-account-create-update-7p9zr" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.314495 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d904-account-create-update-7p9zr" event={"ID":"183769d0-3dde-43ba-995a-16aa55c72ff8","Type":"ContainerDied","Data":"ae0a981bb92035758cdfbcb6128418bd364ddaca4a3fa9c600b732d86f5f8384"} Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.314533 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae0a981bb92035758cdfbcb6128418bd364ddaca4a3fa9c600b732d86f5f8384" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.321548 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fq9gh" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.323445 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fq9gh" event={"ID":"6d4e563c-8b5a-4405-a623-805bd1da0ef3","Type":"ContainerDied","Data":"4a13faa5dbf26c2a5d7349f34db5273e43e75c1fb634c28fb3a326f2cceca2de"} Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.323482 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a13faa5dbf26c2a5d7349f34db5273e43e75c1fb634c28fb3a326f2cceca2de" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.337150 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vlfkq" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.337184 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vlfkq" event={"ID":"ac413b61-b5c8-44d6-9968-b2a2e166ae25","Type":"ContainerDied","Data":"29b4c12ea5f652b8037828096df1cdecc6b88a8328c699349172dfe1fab56db4"} Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.337564 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29b4c12ea5f652b8037828096df1cdecc6b88a8328c699349172dfe1fab56db4" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.339027 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zmqp8" event={"ID":"af05acd6-857a-4997-a369-54921d3db536","Type":"ContainerDied","Data":"16f46c4a60c1b104a5e87b679c81960f7f0cb2da0633355d1bd9e8b3997db3ff"} Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.339055 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16f46c4a60c1b104a5e87b679c81960f7f0cb2da0633355d1bd9e8b3997db3ff" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.339097 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zmqp8" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.342134 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2108-account-create-update-gqshp" event={"ID":"3515d4a8-4470-4062-99c4-54388510f693","Type":"ContainerDied","Data":"213d07d655530515d469536e62a090080607545a55798a47ffe95411af27a353"} Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.342175 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="213d07d655530515d469536e62a090080607545a55798a47ffe95411af27a353" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.342156 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2108-account-create-update-gqshp" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.342665 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wm596" podStartSLOduration=1.852528655 podStartE2EDuration="7.342653505s" podCreationTimestamp="2026-02-14 11:00:32 +0000 UTC" firstStartedPulling="2026-02-14 11:00:33.511014279 +0000 UTC m=+1143.879641647" lastFinishedPulling="2026-02-14 11:00:39.001139129 +0000 UTC m=+1149.369766497" observedRunningTime="2026-02-14 11:00:39.330624068 +0000 UTC m=+1149.699251446" watchObservedRunningTime="2026-02-14 11:00:39.342653505 +0000 UTC m=+1149.711280873" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.405223 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j28pt\" (UniqueName: \"kubernetes.io/projected/6d4e563c-8b5a-4405-a623-805bd1da0ef3-kube-api-access-j28pt\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.405343 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7f8f\" (UniqueName: \"kubernetes.io/projected/183769d0-3dde-43ba-995a-16aa55c72ff8-kube-api-access-g7f8f\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.405375 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/183769d0-3dde-43ba-995a-16aa55c72ff8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.405386 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlrrg\" (UniqueName: \"kubernetes.io/projected/af05acd6-857a-4997-a369-54921d3db536-kube-api-access-nlrrg\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.405397 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctprr\" (UniqueName: 
\"kubernetes.io/projected/3515d4a8-4470-4062-99c4-54388510f693-kube-api-access-ctprr\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:39 crc kubenswrapper[4736]: I0214 11:00:39.405410 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e563c-8b5a-4405-a623-805bd1da0ef3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:40 crc kubenswrapper[4736]: I0214 11:00:40.353480 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"3fa16962fc0e704181621ee9c1c2d385668712c6ee0ad6595d9e5f5669d0c669"} Feb 14 11:00:40 crc kubenswrapper[4736]: I0214 11:00:40.353849 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"f56f3903bb04ed3daa3c0803577421326430c58cd1f229e186cdcef2c76262c7"} Feb 14 11:00:40 crc kubenswrapper[4736]: I0214 11:00:40.353866 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"0d28ed6234399a57f355c4af96f00ac7da35795846c5d276db65164a449fb8e1"} Feb 14 11:00:41 crc kubenswrapper[4736]: I0214 11:00:41.363043 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"95e76c409ec1422d96db1aaad3e3922d5d397ff250d4dec7f1f9ca3407c197bc"} Feb 14 11:00:42 crc kubenswrapper[4736]: I0214 11:00:42.409356 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kbq8d" event={"ID":"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1","Type":"ContainerStarted","Data":"c68572b4f95e3350f80c134eacf2c8bad1e8c242e1941b227b8aae42e6db8d8d"} Feb 14 11:00:42 crc kubenswrapper[4736]: I0214 11:00:42.413838 4736 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"f9fa4fc37808130a7b2d86521dce3b07df01410d30170f47811360864b3a1637"} Feb 14 11:00:42 crc kubenswrapper[4736]: I0214 11:00:42.414004 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"b9e358ac586e9031c383d98ef3613cbd9d5b8d681fed7f18860eca879e8ee7cf"} Feb 14 11:00:42 crc kubenswrapper[4736]: I0214 11:00:42.414076 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"8ae35f43e1bb631a62e94b7933cdf1300f8dda249bc4ed2cd009c43dea94238b"} Feb 14 11:00:42 crc kubenswrapper[4736]: I0214 11:00:42.428193 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kbq8d" podStartSLOduration=2.716759336 podStartE2EDuration="32.428177476s" podCreationTimestamp="2026-02-14 11:00:10 +0000 UTC" firstStartedPulling="2026-02-14 11:00:11.251919982 +0000 UTC m=+1121.620547350" lastFinishedPulling="2026-02-14 11:00:40.963338122 +0000 UTC m=+1151.331965490" observedRunningTime="2026-02-14 11:00:42.423349227 +0000 UTC m=+1152.791976595" watchObservedRunningTime="2026-02-14 11:00:42.428177476 +0000 UTC m=+1152.796804844" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.429226 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"8892d26817176e0386b62c9397a68f364e256bef392480ac1e229a11de061ee1"} Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.430758 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"4409283eb9a0aa9c650e62095c12b937ad0f7554efb1c8b0cea29d37eaac1541"} Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.430861 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"3287b91184009abdb0fdbda192006862e7ed850264e78d8fda9420edafdb52cf"} Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.430940 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0283d5c8-4795-458e-8faf-c4908c75e01e","Type":"ContainerStarted","Data":"81e5e74e78d8bd9885027cfae5c1afbc9e3b060a29801f94cc31089db77482bf"} Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.475710 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.833592609 podStartE2EDuration="46.475685692s" podCreationTimestamp="2026-02-14 10:59:57 +0000 UTC" firstStartedPulling="2026-02-14 11:00:31.002362895 +0000 UTC m=+1141.370990303" lastFinishedPulling="2026-02-14 11:00:41.644455988 +0000 UTC m=+1152.013083386" observedRunningTime="2026-02-14 11:00:43.467321862 +0000 UTC m=+1153.835949250" watchObservedRunningTime="2026-02-14 11:00:43.475685692 +0000 UTC m=+1153.844313080" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.773197 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-z6crm"] Feb 14 11:00:43 crc kubenswrapper[4736]: E0214 11:00:43.774270 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe4563b-fc89-4e16-9fb7-f832fc1cf699" containerName="mariadb-account-create-update" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774291 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe4563b-fc89-4e16-9fb7-f832fc1cf699" containerName="mariadb-account-create-update" Feb 14 11:00:43 crc kubenswrapper[4736]: 
E0214 11:00:43.774309 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac413b61-b5c8-44d6-9968-b2a2e166ae25" containerName="mariadb-database-create" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774315 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac413b61-b5c8-44d6-9968-b2a2e166ae25" containerName="mariadb-database-create" Feb 14 11:00:43 crc kubenswrapper[4736]: E0214 11:00:43.774339 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3515d4a8-4470-4062-99c4-54388510f693" containerName="mariadb-account-create-update" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774349 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3515d4a8-4470-4062-99c4-54388510f693" containerName="mariadb-account-create-update" Feb 14 11:00:43 crc kubenswrapper[4736]: E0214 11:00:43.774369 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" containerName="ovn-config" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774376 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" containerName="ovn-config" Feb 14 11:00:43 crc kubenswrapper[4736]: E0214 11:00:43.774390 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183769d0-3dde-43ba-995a-16aa55c72ff8" containerName="mariadb-account-create-update" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774396 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="183769d0-3dde-43ba-995a-16aa55c72ff8" containerName="mariadb-account-create-update" Feb 14 11:00:43 crc kubenswrapper[4736]: E0214 11:00:43.774420 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af05acd6-857a-4997-a369-54921d3db536" containerName="mariadb-database-create" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774426 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="af05acd6-857a-4997-a369-54921d3db536" 
containerName="mariadb-database-create" Feb 14 11:00:43 crc kubenswrapper[4736]: E0214 11:00:43.774438 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4e563c-8b5a-4405-a623-805bd1da0ef3" containerName="mariadb-database-create" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774444 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4e563c-8b5a-4405-a623-805bd1da0ef3" containerName="mariadb-database-create" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774752 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4e563c-8b5a-4405-a623-805bd1da0ef3" containerName="mariadb-database-create" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774768 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="af05acd6-857a-4997-a369-54921d3db536" containerName="mariadb-database-create" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774778 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3515d4a8-4470-4062-99c4-54388510f693" containerName="mariadb-account-create-update" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774788 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe4563b-fc89-4e16-9fb7-f832fc1cf699" containerName="mariadb-account-create-update" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774803 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="183769d0-3dde-43ba-995a-16aa55c72ff8" containerName="mariadb-account-create-update" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774816 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac413b61-b5c8-44d6-9968-b2a2e166ae25" containerName="mariadb-database-create" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.774834 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b427a09e-39a8-448e-a1c9-9fe6cf3c6c4b" containerName="ovn-config" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.776126 
4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.795481 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.800977 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-z6crm"] Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.881810 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.881929 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.882001 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-config\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.882029 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: 
\"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.882052 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2t8\" (UniqueName: \"kubernetes.io/projected/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-kube-api-access-2w2t8\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.882080 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-svc\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.983767 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.984066 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w2t8\" (UniqueName: \"kubernetes.io/projected/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-kube-api-access-2w2t8\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.984178 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-svc\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: 
\"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.984309 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.984415 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.984532 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-config\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.985162 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.985283 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-svc\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" 
Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.985287 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.985616 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-config\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:43 crc kubenswrapper[4736]: I0214 11:00:43.986782 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:44 crc kubenswrapper[4736]: I0214 11:00:44.004476 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w2t8\" (UniqueName: \"kubernetes.io/projected/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-kube-api-access-2w2t8\") pod \"dnsmasq-dns-764c5664d7-z6crm\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:44 crc kubenswrapper[4736]: I0214 11:00:44.115948 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:44 crc kubenswrapper[4736]: I0214 11:00:44.597195 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-z6crm"] Feb 14 11:00:45 crc kubenswrapper[4736]: I0214 11:00:45.443695 4736 generic.go:334] "Generic (PLEG): container finished" podID="67db18bb-5e92-47e1-b1e7-fe042c74cdd3" containerID="dfdf239a4a383fbea5522806e982518f4f4633f08afb58ebc5b11db62fa1f516" exitCode=0 Feb 14 11:00:45 crc kubenswrapper[4736]: I0214 11:00:45.443792 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" event={"ID":"67db18bb-5e92-47e1-b1e7-fe042c74cdd3","Type":"ContainerDied","Data":"dfdf239a4a383fbea5522806e982518f4f4633f08afb58ebc5b11db62fa1f516"} Feb 14 11:00:45 crc kubenswrapper[4736]: I0214 11:00:45.444301 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" event={"ID":"67db18bb-5e92-47e1-b1e7-fe042c74cdd3","Type":"ContainerStarted","Data":"a6f1e9603f2de06e660056d87500a5ddddcecf16f6f9e95c1c41bd3fbdaa0a8c"} Feb 14 11:00:46 crc kubenswrapper[4736]: I0214 11:00:46.456679 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" event={"ID":"67db18bb-5e92-47e1-b1e7-fe042c74cdd3","Type":"ContainerStarted","Data":"44880df19741d5d830ee8877ab5eedd4703473b18794184ae33f2992cdc53b9a"} Feb 14 11:00:46 crc kubenswrapper[4736]: I0214 11:00:46.457045 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:46 crc kubenswrapper[4736]: I0214 11:00:46.496300 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" podStartSLOduration=3.496272584 podStartE2EDuration="3.496272584s" podCreationTimestamp="2026-02-14 11:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:46.485885775 +0000 UTC m=+1156.854513213" watchObservedRunningTime="2026-02-14 11:00:46.496272584 +0000 UTC m=+1156.864899992" Feb 14 11:00:48 crc kubenswrapper[4736]: I0214 11:00:48.479274 4736 generic.go:334] "Generic (PLEG): container finished" podID="efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d" containerID="5400047ea1aa4428f622735a7a13601b46e835600b5b9c51877d2f2d49b7d805" exitCode=0 Feb 14 11:00:48 crc kubenswrapper[4736]: I0214 11:00:48.479368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wm596" event={"ID":"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d","Type":"ContainerDied","Data":"5400047ea1aa4428f622735a7a13601b46e835600b5b9c51877d2f2d49b7d805"} Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.791035 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.888708 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqgh5\" (UniqueName: \"kubernetes.io/projected/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-kube-api-access-xqgh5\") pod \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.888788 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-config-data\") pod \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\" (UID: \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.888838 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-combined-ca-bundle\") pod \"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\" (UID: 
\"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d\") " Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.896582 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-kube-api-access-xqgh5" (OuterVolumeSpecName: "kube-api-access-xqgh5") pod "efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d" (UID: "efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d"). InnerVolumeSpecName "kube-api-access-xqgh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.932701 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-config-data" (OuterVolumeSpecName: "config-data") pod "efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d" (UID: "efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.941189 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d" (UID: "efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.991661 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.991781 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqgh5\" (UniqueName: \"kubernetes.io/projected/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-kube-api-access-xqgh5\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:49 crc kubenswrapper[4736]: I0214 11:00:49.991809 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.497388 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wm596" event={"ID":"efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d","Type":"ContainerDied","Data":"93120f70b9f3af2b768530372d5b4ce389eab9ad235d55feded7cca48b9a8c4a"} Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.497435 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93120f70b9f3af2b768530372d5b4ce389eab9ad235d55feded7cca48b9a8c4a" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.497526 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wm596" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.798974 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-z6crm"] Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.799553 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" podUID="67db18bb-5e92-47e1-b1e7-fe042c74cdd3" containerName="dnsmasq-dns" containerID="cri-o://44880df19741d5d830ee8877ab5eedd4703473b18794184ae33f2992cdc53b9a" gracePeriod=10 Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.803913 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.838231 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ksvjl"] Feb 14 11:00:50 crc kubenswrapper[4736]: E0214 11:00:50.839075 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d" containerName="keystone-db-sync" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.839157 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d" containerName="keystone-db-sync" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.839535 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d" containerName="keystone-db-sync" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.840319 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.861347 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.861564 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.861819 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.861932 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.862085 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t8r6k" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.887312 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ksvjl"] Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.900608 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-hpxbj"] Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.902067 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.933851 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-hpxbj"] Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.937768 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-credential-keys\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.938000 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-fernet-keys\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.938110 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-combined-ca-bundle\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.938198 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99lj\" (UniqueName: \"kubernetes.io/projected/4130c06a-06b6-4500-8851-a80b42847fdb-kube-api-access-m99lj\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.938283 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-scripts\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:50 crc kubenswrapper[4736]: I0214 11:00:50.938388 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-config-data\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.021346 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9bdr9"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.022375 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.029130 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jplsk" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.029341 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.037176 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.042673 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.044059 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-fernet-keys\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.044256 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-combined-ca-bundle\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.044372 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m99lj\" (UniqueName: \"kubernetes.io/projected/4130c06a-06b6-4500-8851-a80b42847fdb-kube-api-access-m99lj\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.044561 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-config\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.044666 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-svc\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.045674 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-scripts\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.045932 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-config-data\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.046025 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.046129 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.046208 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5vl9\" (UniqueName: \"kubernetes.io/projected/81c74116-030a-4067-b13d-d38f0d77c738-kube-api-access-k5vl9\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.046355 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-credential-keys\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.064113 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-credential-keys\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.064533 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9bdr9"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.066090 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-scripts\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.067476 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-fernet-keys\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.070500 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-combined-ca-bundle\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.072142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-config-data\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.102551 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m99lj\" (UniqueName: \"kubernetes.io/projected/4130c06a-06b6-4500-8851-a80b42847fdb-kube-api-access-m99lj\") pod \"keystone-bootstrap-ksvjl\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.147891 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.147943 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.147966 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5vl9\" (UniqueName: \"kubernetes.io/projected/81c74116-030a-4067-b13d-d38f0d77c738-kube-api-access-k5vl9\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148000 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-db-sync-config-data\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148022 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw8rk\" (UniqueName: \"kubernetes.io/projected/d43521c3-8892-4a34-af06-1d93a8f50c38-kube-api-access-lw8rk\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148041 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-scripts\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148065 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148103 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-config\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148125 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-combined-ca-bundle\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148145 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-svc\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148161 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-config-data\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148177 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d43521c3-8892-4a34-af06-1d93a8f50c38-etc-machine-id\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.148966 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.149538 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.150302 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.150815 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-config\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.156015 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-svc\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.165850 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5644b876d5-wp4lb"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.167213 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.171173 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.177077 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-qsqhr" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.177271 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.178205 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.179282 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.209182 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5644b876d5-wp4lb"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.218788 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5vl9\" (UniqueName: \"kubernetes.io/projected/81c74116-030a-4067-b13d-d38f0d77c738-kube-api-access-k5vl9\") pod \"dnsmasq-dns-5959f8865f-hpxbj\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.238111 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.270726 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-scripts\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.271761 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2620f316-944b-449d-88cf-60670074d345-horizon-secret-key\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.271803 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-db-sync-config-data\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.271831 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw8rk\" (UniqueName: \"kubernetes.io/projected/d43521c3-8892-4a34-af06-1d93a8f50c38-kube-api-access-lw8rk\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.271856 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-scripts\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 
crc kubenswrapper[4736]: I0214 11:00:51.271929 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g647v\" (UniqueName: \"kubernetes.io/projected/2620f316-944b-449d-88cf-60670074d345-kube-api-access-g647v\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.271948 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-combined-ca-bundle\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.271970 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-config-data\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.271984 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2620f316-944b-449d-88cf-60670074d345-logs\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.272005 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d43521c3-8892-4a34-af06-1d93a8f50c38-etc-machine-id\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.272107 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-config-data\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.273843 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d43521c3-8892-4a34-af06-1d93a8f50c38-etc-machine-id\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.280023 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.287100 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.295598 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.295828 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.303170 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2kwz6"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.309392 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.312250 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-db-sync-config-data\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.312320 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-scripts\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.313731 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nzfzh" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.314470 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-combined-ca-bundle\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.319109 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.319384 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.342702 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-config-data\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " 
pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.352576 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw8rk\" (UniqueName: \"kubernetes.io/projected/d43521c3-8892-4a34-af06-1d93a8f50c38-kube-api-access-lw8rk\") pod \"cinder-db-sync-9bdr9\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.361767 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2kwz6"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.374890 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-scripts\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.375009 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws2pk\" (UniqueName: \"kubernetes.io/projected/b93585de-a12c-446d-a045-16d74eb6d7db-kube-api-access-ws2pk\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.375089 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-scripts\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.375812 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2620f316-944b-449d-88cf-60670074d345-horizon-secret-key\") pod \"horizon-5644b876d5-wp4lb\" (UID: 
\"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.375947 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.376083 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g647v\" (UniqueName: \"kubernetes.io/projected/2620f316-944b-449d-88cf-60670074d345-kube-api-access-g647v\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.376172 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-run-httpd\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.376248 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2620f316-944b-449d-88cf-60670074d345-logs\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.376337 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc 
kubenswrapper[4736]: I0214 11:00:51.376454 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-log-httpd\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.376536 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-config-data\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.376625 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-config-data\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.377813 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-config-data\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.386136 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2620f316-944b-449d-88cf-60670074d345-horizon-secret-key\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.386200 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2620f316-944b-449d-88cf-60670074d345-logs\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.386751 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-scripts\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.408814 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.426226 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-z89bc"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.427517 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.432394 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.432569 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.432710 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cshh5" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.450335 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g647v\" (UniqueName: \"kubernetes.io/projected/2620f316-944b-449d-88cf-60670074d345-kube-api-access-g647v\") pod \"horizon-5644b876d5-wp4lb\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.452877 4736 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f8476fff7-jvqbj"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.454351 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.477871 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4ksm2"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.479071 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.479901 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh2n2\" (UniqueName: \"kubernetes.io/projected/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-kube-api-access-lh2n2\") pod \"neutron-db-sync-2kwz6\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.479951 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-run-httpd\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.479987 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.480052 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-config\") pod 
\"neutron-db-sync-2kwz6\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.480078 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-log-httpd\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.480099 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-config-data\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.480153 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws2pk\" (UniqueName: \"kubernetes.io/projected/b93585de-a12c-446d-a045-16d74eb6d7db-kube-api-access-ws2pk\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.480168 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-scripts\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.480192 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-combined-ca-bundle\") pod \"neutron-db-sync-2kwz6\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.480269 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.483930 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-config-data\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.484933 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.490557 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-log-httpd\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.487255 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-run-httpd\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.484963 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mmvt8" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.505505 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-hpxbj"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.523388 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.524538 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-scripts\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.524993 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.542594 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws2pk\" (UniqueName: \"kubernetes.io/projected/b93585de-a12c-446d-a045-16d74eb6d7db-kube-api-access-ws2pk\") pod \"ceilometer-0\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.552587 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z89bc"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.566126 4736 generic.go:334] "Generic (PLEG): container finished" podID="67db18bb-5e92-47e1-b1e7-fe042c74cdd3" containerID="44880df19741d5d830ee8877ab5eedd4703473b18794184ae33f2992cdc53b9a" exitCode=0 Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.566347 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" event={"ID":"67db18bb-5e92-47e1-b1e7-fe042c74cdd3","Type":"ContainerDied","Data":"44880df19741d5d830ee8877ab5eedd4703473b18794184ae33f2992cdc53b9a"} Feb 14 11:00:51 crc kubenswrapper[4736]: 
I0214 11:00:51.580319 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4ksm2"] Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581142 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-combined-ca-bundle\") pod \"neutron-db-sync-2kwz6\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581172 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-scripts\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581213 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrscn\" (UniqueName: \"kubernetes.io/projected/f8f62557-0339-4cd9-884b-a3fdbc564ed0-kube-api-access-hrscn\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581233 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-combined-ca-bundle\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581254 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f62557-0339-4cd9-884b-a3fdbc564ed0-logs\") pod \"placement-db-sync-z89bc\" (UID: 
\"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581280 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-scripts\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581324 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh2n2\" (UniqueName: \"kubernetes.io/projected/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-kube-api-access-lh2n2\") pod \"neutron-db-sync-2kwz6\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581347 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-horizon-secret-key\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581366 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-db-sync-config-data\") pod \"barbican-db-sync-4ksm2\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581382 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-config-data\") pod \"horizon-6f8476fff7-jvqbj\" (UID: 
\"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581400 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb8j6\" (UniqueName: \"kubernetes.io/projected/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-kube-api-access-jb8j6\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581427 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-config\") pod \"neutron-db-sync-2kwz6\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581441 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkpm\" (UniqueName: \"kubernetes.io/projected/df559ea6-6169-48d5-a47c-f765681b9a1e-kube-api-access-dbkpm\") pod \"barbican-db-sync-4ksm2\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581477 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-config-data\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581506 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-logs\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " 
pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.581523 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-combined-ca-bundle\") pod \"barbican-db-sync-4ksm2\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.590097 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-combined-ca-bundle\") pod \"neutron-db-sync-2kwz6\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.601986 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.619860 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh2n2\" (UniqueName: \"kubernetes.io/projected/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-kube-api-access-lh2n2\") pod \"neutron-db-sync-2kwz6\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.638987 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.639177 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-config\") pod \"neutron-db-sync-2kwz6\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.648006 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.663849 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.683966 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-config-data\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.684042 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-logs\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.684084 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-combined-ca-bundle\") pod \"barbican-db-sync-4ksm2\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.686650 4736 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-scripts\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.686847 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrscn\" (UniqueName: \"kubernetes.io/projected/f8f62557-0339-4cd9-884b-a3fdbc564ed0-kube-api-access-hrscn\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.686937 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-combined-ca-bundle\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.687021 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f62557-0339-4cd9-884b-a3fdbc564ed0-logs\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.687134 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-scripts\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.687259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-horizon-secret-key\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.687354 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-db-sync-config-data\") pod \"barbican-db-sync-4ksm2\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.687444 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-config-data\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.687538 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb8j6\" (UniqueName: \"kubernetes.io/projected/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-kube-api-access-jb8j6\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.687659 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkpm\" (UniqueName: \"kubernetes.io/projected/df559ea6-6169-48d5-a47c-f765681b9a1e-kube-api-access-dbkpm\") pod \"barbican-db-sync-4ksm2\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.711914 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f62557-0339-4cd9-884b-a3fdbc564ed0-logs\") pod 
\"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.712472 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-scripts\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.721562 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-logs\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.727391 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-config-data\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.730669 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-config-data\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.736089 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-combined-ca-bundle\") pod \"barbican-db-sync-4ksm2\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.737833 
4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-combined-ca-bundle\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.737868 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-db-sync-config-data\") pod \"barbican-db-sync-4ksm2\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.738101 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-horizon-secret-key\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.743599 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkpm\" (UniqueName: \"kubernetes.io/projected/df559ea6-6169-48d5-a47c-f765681b9a1e-kube-api-access-dbkpm\") pod \"barbican-db-sync-4ksm2\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.749878 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-scripts\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.752732 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrscn\" (UniqueName: 
\"kubernetes.io/projected/f8f62557-0339-4cd9-884b-a3fdbc564ed0-kube-api-access-hrscn\") pod \"placement-db-sync-z89bc\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.757100 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z89bc" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.765051 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb8j6\" (UniqueName: \"kubernetes.io/projected/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-kube-api-access-jb8j6\") pod \"horizon-6f8476fff7-jvqbj\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:51 crc kubenswrapper[4736]: I0214 11:00:51.765262 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f8476fff7-jvqbj"] Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.790967 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zm8d5"] Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.792642 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.816249 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zm8d5"] Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.835296 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.863904 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.901407 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.901464 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.901496 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.901546 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.901582 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crng7\" (UniqueName: \"kubernetes.io/projected/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-kube-api-access-crng7\") pod 
\"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:51.901631 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-config\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.006537 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.006584 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.006622 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.006656 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: 
\"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.006693 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crng7\" (UniqueName: \"kubernetes.io/projected/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-kube-api-access-crng7\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.006728 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-config\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.007609 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-config\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.008133 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.008630 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " 
pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.021271 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.021776 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.057568 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crng7\" (UniqueName: \"kubernetes.io/projected/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-kube-api-access-crng7\") pod \"dnsmasq-dns-58dd9ff6bc-zm8d5\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:52 crc kubenswrapper[4736]: I0214 11:00:52.154354 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.006381 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.109092 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ksvjl"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.124644 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5644b876d5-wp4lb"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.133077 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-config\") pod \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.133197 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-sb\") pod \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.133240 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-nb\") pod \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.133269 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-swift-storage-0\") pod \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.133314 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w2t8\" (UniqueName: 
\"kubernetes.io/projected/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-kube-api-access-2w2t8\") pod \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.133350 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-svc\") pod \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\" (UID: \"67db18bb-5e92-47e1-b1e7-fe042c74cdd3\") " Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.160262 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6746767d7f-fbbd6"] Feb 14 11:00:53 crc kubenswrapper[4736]: E0214 11:00:53.160737 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67db18bb-5e92-47e1-b1e7-fe042c74cdd3" containerName="init" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.160765 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="67db18bb-5e92-47e1-b1e7-fe042c74cdd3" containerName="init" Feb 14 11:00:53 crc kubenswrapper[4736]: E0214 11:00:53.160774 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67db18bb-5e92-47e1-b1e7-fe042c74cdd3" containerName="dnsmasq-dns" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.160780 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="67db18bb-5e92-47e1-b1e7-fe042c74cdd3" containerName="dnsmasq-dns" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.160943 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="67db18bb-5e92-47e1-b1e7-fe042c74cdd3" containerName="dnsmasq-dns" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.161776 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.177047 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-kube-api-access-2w2t8" (OuterVolumeSpecName: "kube-api-access-2w2t8") pod "67db18bb-5e92-47e1-b1e7-fe042c74cdd3" (UID: "67db18bb-5e92-47e1-b1e7-fe042c74cdd3"). InnerVolumeSpecName "kube-api-access-2w2t8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.237383 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6746767d7f-fbbd6"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.238291 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ksp\" (UniqueName: \"kubernetes.io/projected/6d41eaae-c5d4-4c07-9092-88977262c313-kube-api-access-l2ksp\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.238341 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d41eaae-c5d4-4c07-9092-88977262c313-logs\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.238389 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d41eaae-c5d4-4c07-9092-88977262c313-horizon-secret-key\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.238444 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-config-data\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.238685 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-scripts\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.238761 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w2t8\" (UniqueName: \"kubernetes.io/projected/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-kube-api-access-2w2t8\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.304423 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-hpxbj"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.340423 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d41eaae-c5d4-4c07-9092-88977262c313-horizon-secret-key\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.340710 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-config-data\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.340907 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-scripts\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.341519 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-scripts\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.341612 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2ksp\" (UniqueName: \"kubernetes.io/projected/6d41eaae-c5d4-4c07-9092-88977262c313-kube-api-access-l2ksp\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.341958 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d41eaae-c5d4-4c07-9092-88977262c313-logs\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.342484 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d41eaae-c5d4-4c07-9092-88977262c313-logs\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.344300 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"67db18bb-5e92-47e1-b1e7-fe042c74cdd3" (UID: "67db18bb-5e92-47e1-b1e7-fe042c74cdd3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.345170 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-config-data\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.368190 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "67db18bb-5e92-47e1-b1e7-fe042c74cdd3" (UID: "67db18bb-5e92-47e1-b1e7-fe042c74cdd3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.373798 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d41eaae-c5d4-4c07-9092-88977262c313-horizon-secret-key\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.388287 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2ksp\" (UniqueName: \"kubernetes.io/projected/6d41eaae-c5d4-4c07-9092-88977262c313-kube-api-access-l2ksp\") pod \"horizon-6746767d7f-fbbd6\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.395958 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-config" (OuterVolumeSpecName: 
"config") pod "67db18bb-5e92-47e1-b1e7-fe042c74cdd3" (UID: "67db18bb-5e92-47e1-b1e7-fe042c74cdd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.404447 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67db18bb-5e92-47e1-b1e7-fe042c74cdd3" (UID: "67db18bb-5e92-47e1-b1e7-fe042c74cdd3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.413261 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67db18bb-5e92-47e1-b1e7-fe042c74cdd3" (UID: "67db18bb-5e92-47e1-b1e7-fe042c74cdd3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.439354 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.451178 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.451598 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.451659 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.451716 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.451802 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67db18bb-5e92-47e1-b1e7-fe042c74cdd3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.510595 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z89bc"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.533061 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f8476fff7-jvqbj"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.561033 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-5644b876d5-wp4lb"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.573018 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.620558 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2kwz6"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.627985 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f8476fff7-jvqbj" event={"ID":"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d","Type":"ContainerStarted","Data":"4a261e7aff27095b7d565915df97ca74648f25b5e3936d0f221759f1a7178ff6"} Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.634998 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4ksm2"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.643752 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" event={"ID":"81c74116-030a-4067-b13d-d38f0d77c738","Type":"ContainerStarted","Data":"ddcbcac2cce5b583dd6419952134483f44707734d5fe6d9d1acf26b9e8cab84d"} Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.654444 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.654435 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-z6crm" event={"ID":"67db18bb-5e92-47e1-b1e7-fe042c74cdd3","Type":"ContainerDied","Data":"a6f1e9603f2de06e660056d87500a5ddddcecf16f6f9e95c1c41bd3fbdaa0a8c"} Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.654911 4736 scope.go:117] "RemoveContainer" containerID="44880df19741d5d830ee8877ab5eedd4703473b18794184ae33f2992cdc53b9a" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.655980 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zm8d5"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.658102 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93585de-a12c-446d-a045-16d74eb6d7db","Type":"ContainerStarted","Data":"0754bd8971c04f5265e8ff3627cd335a1da6aadb2fbdbd9352c47bbd044d7c31"} Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.661042 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.661851 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z89bc" event={"ID":"f8f62557-0339-4cd9-884b-a3fdbc564ed0","Type":"ContainerStarted","Data":"0b5947149c62d16a2e0a05278e37c3cc0892b3834965e2da47cd77a538a18e72"} Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.666370 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5644b876d5-wp4lb" event={"ID":"2620f316-944b-449d-88cf-60670074d345","Type":"ContainerStarted","Data":"027640721f6d0ff37b3530b378a7be001676b46b094192d9197d31bbef5aa6c7"} Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.669135 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ksvjl" event={"ID":"4130c06a-06b6-4500-8851-a80b42847fdb","Type":"ContainerStarted","Data":"ecdc3b6214a48cbe77a947fc680f4c11e0ae0f50c6d25ea82d0dc2ad6ff1c87e"} Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.669177 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ksvjl" event={"ID":"4130c06a-06b6-4500-8851-a80b42847fdb","Type":"ContainerStarted","Data":"03f8b950571c108581f65b0a32456c24b8eda9e5c81c5cc7e21b82c985239653"} Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.682913 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9bdr9"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.688757 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ksvjl" podStartSLOduration=3.688580078 podStartE2EDuration="3.688580078s" podCreationTimestamp="2026-02-14 11:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:53.684890452 +0000 UTC m=+1164.053517810" watchObservedRunningTime="2026-02-14 
11:00:53.688580078 +0000 UTC m=+1164.057207436" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.696374 4736 scope.go:117] "RemoveContainer" containerID="dfdf239a4a383fbea5522806e982518f4f4633f08afb58ebc5b11db62fa1f516" Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.727580 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-z6crm"] Feb 14 11:00:53 crc kubenswrapper[4736]: I0214 11:00:53.736841 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-z6crm"] Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.288545 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6746767d7f-fbbd6"] Feb 14 11:00:54 crc kubenswrapper[4736]: W0214 11:00:54.308000 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d41eaae_c5d4_4c07_9092_88977262c313.slice/crio-ff58a93796aaba62612668eeb727e59e13c00f750f7dbbb5357338ff559c4907 WatchSource:0}: Error finding container ff58a93796aaba62612668eeb727e59e13c00f750f7dbbb5357338ff559c4907: Status 404 returned error can't find the container with id ff58a93796aaba62612668eeb727e59e13c00f750f7dbbb5357338ff559c4907 Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.436068 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67db18bb-5e92-47e1-b1e7-fe042c74cdd3" path="/var/lib/kubelet/pods/67db18bb-5e92-47e1-b1e7-fe042c74cdd3/volumes" Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.727228 4736 generic.go:334] "Generic (PLEG): container finished" podID="a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" containerID="33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2" exitCode=0 Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.727328 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" 
event={"ID":"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f","Type":"ContainerDied","Data":"33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2"} Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.727379 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" event={"ID":"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f","Type":"ContainerStarted","Data":"2878a9bc1269e442203edaaed531196beb223dfb712cb21dcdad88a63d2f8538"} Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.731730 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2kwz6" event={"ID":"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c","Type":"ContainerStarted","Data":"f3b4377da7dd5855e4eff16ad5c07880f6f10d48d1fe9b2819209d15a27858e7"} Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.731830 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2kwz6" event={"ID":"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c","Type":"ContainerStarted","Data":"c28426f3f7130b706a8f5812c304c984b24153d1613406c4372a3eb394cbec81"} Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.754118 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ksm2" event={"ID":"df559ea6-6169-48d5-a47c-f765681b9a1e","Type":"ContainerStarted","Data":"22761a3e7e1fc719ee7a47cb5666eb543d9ec27d5f16509b7a79e9ffee17bb37"} Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.756886 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9bdr9" event={"ID":"d43521c3-8892-4a34-af06-1d93a8f50c38","Type":"ContainerStarted","Data":"8aa4fc902c564de30741071f4244f41e82c8be0146f266cf515e31ada855b9d7"} Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.759919 4736 generic.go:334] "Generic (PLEG): container finished" podID="81c74116-030a-4067-b13d-d38f0d77c738" containerID="0f74a241021b29b4538382f051044c906d0aca9ad12466a08fd9fed026153998" exitCode=0 Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 
11:00:54.760142 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" event={"ID":"81c74116-030a-4067-b13d-d38f0d77c738","Type":"ContainerDied","Data":"0f74a241021b29b4538382f051044c906d0aca9ad12466a08fd9fed026153998"} Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.767300 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6746767d7f-fbbd6" event={"ID":"6d41eaae-c5d4-4c07-9092-88977262c313","Type":"ContainerStarted","Data":"ff58a93796aaba62612668eeb727e59e13c00f750f7dbbb5357338ff559c4907"} Feb 14 11:00:54 crc kubenswrapper[4736]: I0214 11:00:54.785271 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2kwz6" podStartSLOduration=3.785255729 podStartE2EDuration="3.785255729s" podCreationTimestamp="2026-02-14 11:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:54.767015404 +0000 UTC m=+1165.135642762" watchObservedRunningTime="2026-02-14 11:00:54.785255729 +0000 UTC m=+1165.153883107" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.226851 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.293599 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-sb\") pod \"81c74116-030a-4067-b13d-d38f0d77c738\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.293686 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-nb\") pod \"81c74116-030a-4067-b13d-d38f0d77c738\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.293734 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5vl9\" (UniqueName: \"kubernetes.io/projected/81c74116-030a-4067-b13d-d38f0d77c738-kube-api-access-k5vl9\") pod \"81c74116-030a-4067-b13d-d38f0d77c738\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.293805 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-config\") pod \"81c74116-030a-4067-b13d-d38f0d77c738\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.293877 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-swift-storage-0\") pod \"81c74116-030a-4067-b13d-d38f0d77c738\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.293901 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-svc\") pod \"81c74116-030a-4067-b13d-d38f0d77c738\" (UID: \"81c74116-030a-4067-b13d-d38f0d77c738\") " Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.311936 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c74116-030a-4067-b13d-d38f0d77c738-kube-api-access-k5vl9" (OuterVolumeSpecName: "kube-api-access-k5vl9") pod "81c74116-030a-4067-b13d-d38f0d77c738" (UID: "81c74116-030a-4067-b13d-d38f0d77c738"). InnerVolumeSpecName "kube-api-access-k5vl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.331278 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81c74116-030a-4067-b13d-d38f0d77c738" (UID: "81c74116-030a-4067-b13d-d38f0d77c738"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.332117 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81c74116-030a-4067-b13d-d38f0d77c738" (UID: "81c74116-030a-4067-b13d-d38f0d77c738"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.345823 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81c74116-030a-4067-b13d-d38f0d77c738" (UID: "81c74116-030a-4067-b13d-d38f0d77c738"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.351507 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81c74116-030a-4067-b13d-d38f0d77c738" (UID: "81c74116-030a-4067-b13d-d38f0d77c738"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.354966 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-config" (OuterVolumeSpecName: "config") pod "81c74116-030a-4067-b13d-d38f0d77c738" (UID: "81c74116-030a-4067-b13d-d38f0d77c738"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.395886 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.395954 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.395964 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5vl9\" (UniqueName: \"kubernetes.io/projected/81c74116-030a-4067-b13d-d38f0d77c738-kube-api-access-k5vl9\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.395977 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-config\") on node \"crc\" DevicePath \"\"" Feb 14 
11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.395986 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.395994 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c74116-030a-4067-b13d-d38f0d77c738-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.832614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" event={"ID":"81c74116-030a-4067-b13d-d38f0d77c738","Type":"ContainerDied","Data":"ddcbcac2cce5b583dd6419952134483f44707734d5fe6d9d1acf26b9e8cab84d"} Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.832664 4736 scope.go:117] "RemoveContainer" containerID="0f74a241021b29b4538382f051044c906d0aca9ad12466a08fd9fed026153998" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.832763 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-hpxbj" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.849927 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" event={"ID":"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f","Type":"ContainerStarted","Data":"eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d"} Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.876018 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" podStartSLOduration=4.875995952 podStartE2EDuration="4.875995952s" podCreationTimestamp="2026-02-14 11:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:00:55.87004137 +0000 UTC m=+1166.238668748" watchObservedRunningTime="2026-02-14 11:00:55.875995952 +0000 UTC m=+1166.244623320" Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.916160 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-hpxbj"] Feb 14 11:00:55 crc kubenswrapper[4736]: I0214 11:00:55.931124 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-hpxbj"] Feb 14 11:00:56 crc kubenswrapper[4736]: I0214 11:00:56.453308 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c74116-030a-4067-b13d-d38f0d77c738" path="/var/lib/kubelet/pods/81c74116-030a-4067-b13d-d38f0d77c738/volumes" Feb 14 11:00:56 crc kubenswrapper[4736]: I0214 11:00:56.861943 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:00:57 crc kubenswrapper[4736]: I0214 11:00:57.877400 4736 generic.go:334] "Generic (PLEG): container finished" podID="7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" containerID="c68572b4f95e3350f80c134eacf2c8bad1e8c242e1941b227b8aae42e6db8d8d" exitCode=0 Feb 14 11:00:57 crc 
kubenswrapper[4736]: I0214 11:00:57.877493 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kbq8d" event={"ID":"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1","Type":"ContainerDied","Data":"c68572b4f95e3350f80c134eacf2c8bad1e8c242e1941b227b8aae42e6db8d8d"} Feb 14 11:00:58 crc kubenswrapper[4736]: I0214 11:00:58.891176 4736 generic.go:334] "Generic (PLEG): container finished" podID="4130c06a-06b6-4500-8851-a80b42847fdb" containerID="ecdc3b6214a48cbe77a947fc680f4c11e0ae0f50c6d25ea82d0dc2ad6ff1c87e" exitCode=0 Feb 14 11:00:58 crc kubenswrapper[4736]: I0214 11:00:58.891250 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ksvjl" event={"ID":"4130c06a-06b6-4500-8851-a80b42847fdb","Type":"ContainerDied","Data":"ecdc3b6214a48cbe77a947fc680f4c11e0ae0f50c6d25ea82d0dc2ad6ff1c87e"} Feb 14 11:00:59 crc kubenswrapper[4736]: I0214 11:00:59.915553 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f8476fff7-jvqbj"] Feb 14 11:00:59 crc kubenswrapper[4736]: I0214 11:00:59.941678 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54b8d5f54d-bvjc4"] Feb 14 11:00:59 crc kubenswrapper[4736]: E0214 11:00:59.942110 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c74116-030a-4067-b13d-d38f0d77c738" containerName="init" Feb 14 11:00:59 crc kubenswrapper[4736]: I0214 11:00:59.942135 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c74116-030a-4067-b13d-d38f0d77c738" containerName="init" Feb 14 11:00:59 crc kubenswrapper[4736]: I0214 11:00:59.942324 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c74116-030a-4067-b13d-d38f0d77c738" containerName="init" Feb 14 11:00:59 crc kubenswrapper[4736]: I0214 11:00:59.943084 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:00:59 crc kubenswrapper[4736]: I0214 11:00:59.950240 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 14 11:00:59 crc kubenswrapper[4736]: I0214 11:00:59.966622 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54b8d5f54d-bvjc4"] Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.011473 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-secret-key\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.011537 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-config-data\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.011564 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-scripts\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.011587 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/7d33f3d6-2722-42c8-b996-4e80eb75860a-kube-api-access-4nbnv\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 
11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.011608 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-tls-certs\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.011638 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d33f3d6-2722-42c8-b996-4e80eb75860a-logs\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.011666 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-combined-ca-bundle\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.025493 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6746767d7f-fbbd6"] Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.080881 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78d96c5d8-mfqqp"] Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.083801 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.114400 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-config-data\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.124904 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgds\" (UniqueName: \"kubernetes.io/projected/bd003c66-fc46-445a-a88a-23a7c17f9747-kube-api-access-pkgds\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.125103 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-scripts\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.125219 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd003c66-fc46-445a-a88a-23a7c17f9747-horizon-secret-key\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.125325 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/7d33f3d6-2722-42c8-b996-4e80eb75860a-kube-api-access-4nbnv\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " 
pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.125403 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-tls-certs\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.125516 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d33f3d6-2722-42c8-b996-4e80eb75860a-logs\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.125625 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd003c66-fc46-445a-a88a-23a7c17f9747-logs\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.125701 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd003c66-fc46-445a-a88a-23a7c17f9747-scripts\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.125806 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-combined-ca-bundle\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.125961 
4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd003c66-fc46-445a-a88a-23a7c17f9747-config-data\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.126129 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd003c66-fc46-445a-a88a-23a7c17f9747-combined-ca-bundle\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.126223 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd003c66-fc46-445a-a88a-23a7c17f9747-horizon-tls-certs\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.126386 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-secret-key\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.128737 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d33f3d6-2722-42c8-b996-4e80eb75860a-logs\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.118573 4736 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-config-data\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.129264 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-scripts\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.133137 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-secret-key\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.136815 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-tls-certs\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.176291 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-combined-ca-bundle\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.208532 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d96c5d8-mfqqp"] Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.231809 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd003c66-fc46-445a-a88a-23a7c17f9747-combined-ca-bundle\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.231860 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd003c66-fc46-445a-a88a-23a7c17f9747-horizon-tls-certs\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.231913 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgds\" (UniqueName: \"kubernetes.io/projected/bd003c66-fc46-445a-a88a-23a7c17f9747-kube-api-access-pkgds\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.231942 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd003c66-fc46-445a-a88a-23a7c17f9747-horizon-secret-key\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.232009 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd003c66-fc46-445a-a88a-23a7c17f9747-logs\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.232039 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/bd003c66-fc46-445a-a88a-23a7c17f9747-scripts\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.232078 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd003c66-fc46-445a-a88a-23a7c17f9747-config-data\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.235736 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd003c66-fc46-445a-a88a-23a7c17f9747-logs\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.239141 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd003c66-fc46-445a-a88a-23a7c17f9747-horizon-secret-key\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.239681 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd003c66-fc46-445a-a88a-23a7c17f9747-scripts\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.240897 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/7d33f3d6-2722-42c8-b996-4e80eb75860a-kube-api-access-4nbnv\") pod \"horizon-54b8d5f54d-bvjc4\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " 
pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.243498 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd003c66-fc46-445a-a88a-23a7c17f9747-config-data\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.269369 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd003c66-fc46-445a-a88a-23a7c17f9747-combined-ca-bundle\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.271340 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.282931 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd003c66-fc46-445a-a88a-23a7c17f9747-horizon-tls-certs\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.433197 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgds\" (UniqueName: \"kubernetes.io/projected/bd003c66-fc46-445a-a88a-23a7c17f9747-kube-api-access-pkgds\") pod \"horizon-78d96c5d8-mfqqp\" (UID: \"bd003c66-fc46-445a-a88a-23a7c17f9747\") " pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:00 crc kubenswrapper[4736]: I0214 11:01:00.433859 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.700510 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.767667 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-fernet-keys\") pod \"4130c06a-06b6-4500-8851-a80b42847fdb\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.767716 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-combined-ca-bundle\") pod \"4130c06a-06b6-4500-8851-a80b42847fdb\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.768387 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m99lj\" (UniqueName: \"kubernetes.io/projected/4130c06a-06b6-4500-8851-a80b42847fdb-kube-api-access-m99lj\") pod \"4130c06a-06b6-4500-8851-a80b42847fdb\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.768494 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-scripts\") pod \"4130c06a-06b6-4500-8851-a80b42847fdb\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.768537 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-config-data\") pod \"4130c06a-06b6-4500-8851-a80b42847fdb\" (UID: 
\"4130c06a-06b6-4500-8851-a80b42847fdb\") " Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.768637 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-credential-keys\") pod \"4130c06a-06b6-4500-8851-a80b42847fdb\" (UID: \"4130c06a-06b6-4500-8851-a80b42847fdb\") " Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.773375 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4130c06a-06b6-4500-8851-a80b42847fdb-kube-api-access-m99lj" (OuterVolumeSpecName: "kube-api-access-m99lj") pod "4130c06a-06b6-4500-8851-a80b42847fdb" (UID: "4130c06a-06b6-4500-8851-a80b42847fdb"). InnerVolumeSpecName "kube-api-access-m99lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.786160 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4130c06a-06b6-4500-8851-a80b42847fdb" (UID: "4130c06a-06b6-4500-8851-a80b42847fdb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.790620 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4130c06a-06b6-4500-8851-a80b42847fdb" (UID: "4130c06a-06b6-4500-8851-a80b42847fdb"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.790715 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-scripts" (OuterVolumeSpecName: "scripts") pod "4130c06a-06b6-4500-8851-a80b42847fdb" (UID: "4130c06a-06b6-4500-8851-a80b42847fdb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.797460 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4130c06a-06b6-4500-8851-a80b42847fdb" (UID: "4130c06a-06b6-4500-8851-a80b42847fdb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.810361 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-config-data" (OuterVolumeSpecName: "config-data") pod "4130c06a-06b6-4500-8851-a80b42847fdb" (UID: "4130c06a-06b6-4500-8851-a80b42847fdb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.871960 4736 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.871992 4736 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.872002 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.872012 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m99lj\" (UniqueName: \"kubernetes.io/projected/4130c06a-06b6-4500-8851-a80b42847fdb-kube-api-access-m99lj\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.872022 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.872031 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4130c06a-06b6-4500-8851-a80b42847fdb-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.920855 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ksvjl" event={"ID":"4130c06a-06b6-4500-8851-a80b42847fdb","Type":"ContainerDied","Data":"03f8b950571c108581f65b0a32456c24b8eda9e5c81c5cc7e21b82c985239653"} Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 
11:01:01.920903 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f8b950571c108581f65b0a32456c24b8eda9e5c81c5cc7e21b82c985239653" Feb 14 11:01:01 crc kubenswrapper[4736]: I0214 11:01:01.920988 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ksvjl" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.154911 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.225273 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gtvzw"] Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.225483 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-gtvzw" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="dnsmasq-dns" containerID="cri-o://84e9d2785827d8ed1bfee59f606255436323d45e3c5f87729da4a707985fb9fc" gracePeriod=10 Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.445237 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-gtvzw" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.887986 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ksvjl"] Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.897794 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ksvjl"] Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.934434 4736 generic.go:334] "Generic (PLEG): container finished" podID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerID="84e9d2785827d8ed1bfee59f606255436323d45e3c5f87729da4a707985fb9fc" exitCode=0 Feb 14 11:01:02 crc 
kubenswrapper[4736]: I0214 11:01:02.934477 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gtvzw" event={"ID":"f9d4ed58-4f61-4c47-acf8-09837e068c27","Type":"ContainerDied","Data":"84e9d2785827d8ed1bfee59f606255436323d45e3c5f87729da4a707985fb9fc"} Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.964035 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tb7hg"] Feb 14 11:01:02 crc kubenswrapper[4736]: E0214 11:01:02.964462 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4130c06a-06b6-4500-8851-a80b42847fdb" containerName="keystone-bootstrap" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.964483 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4130c06a-06b6-4500-8851-a80b42847fdb" containerName="keystone-bootstrap" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.964658 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4130c06a-06b6-4500-8851-a80b42847fdb" containerName="keystone-bootstrap" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.965195 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.973810 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.973972 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.974058 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tb7hg"] Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.974109 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.974218 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 11:01:02 crc kubenswrapper[4736]: I0214 11:01:02.974324 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t8r6k" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.017254 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-config-data\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.017294 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-combined-ca-bundle\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.017332 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-scripts\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.017375 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdkbz\" (UniqueName: \"kubernetes.io/projected/3474d549-a236-46a6-ad9a-46186dca5831-kube-api-access-wdkbz\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.017408 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-fernet-keys\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.017534 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-credential-keys\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.119346 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-credential-keys\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.119604 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-config-data\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.119620 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-combined-ca-bundle\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.119655 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-scripts\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.119674 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdkbz\" (UniqueName: \"kubernetes.io/projected/3474d549-a236-46a6-ad9a-46186dca5831-kube-api-access-wdkbz\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.119694 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-fernet-keys\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.126641 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-credential-keys\") pod \"keystone-bootstrap-tb7hg\" 
(UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.134163 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-config-data\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.134761 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-fernet-keys\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.134969 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-combined-ca-bundle\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.138452 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-scripts\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 11:01:03.143327 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdkbz\" (UniqueName: \"kubernetes.io/projected/3474d549-a236-46a6-ad9a-46186dca5831-kube-api-access-wdkbz\") pod \"keystone-bootstrap-tb7hg\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") " pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:03 crc kubenswrapper[4736]: I0214 
11:01:03.284420 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:04 crc kubenswrapper[4736]: I0214 11:01:04.413819 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4130c06a-06b6-4500-8851-a80b42847fdb" path="/var/lib/kubelet/pods/4130c06a-06b6-4500-8851-a80b42847fdb/volumes" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.324146 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kbq8d" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.403327 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-config-data\") pod \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.403458 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-db-sync-config-data\") pod \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.403764 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlxvh\" (UniqueName: \"kubernetes.io/projected/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-kube-api-access-xlxvh\") pod \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.403826 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-combined-ca-bundle\") pod \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\" (UID: \"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1\") " 
Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.411144 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" (UID: "7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.411189 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-kube-api-access-xlxvh" (OuterVolumeSpecName: "kube-api-access-xlxvh") pod "7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" (UID: "7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1"). InnerVolumeSpecName "kube-api-access-xlxvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.438908 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" (UID: "7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.445231 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-gtvzw" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.466916 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-config-data" (OuterVolumeSpecName: "config-data") pod "7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" (UID: "7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.505509 4736 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.505548 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlxvh\" (UniqueName: \"kubernetes.io/projected/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-kube-api-access-xlxvh\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.505562 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.505573 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.983443 4736 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-db-sync-kbq8d" event={"ID":"7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1","Type":"ContainerDied","Data":"e4dae5dcadb69fcfd3a72916e43460356e87070e1595343be52acfcfc4c01bf8"} Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.983768 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4dae5dcadb69fcfd3a72916e43460356e87070e1595343be52acfcfc4c01bf8" Feb 14 11:01:07 crc kubenswrapper[4736]: I0214 11:01:07.983541 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kbq8d" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.885187 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-m4sm5"] Feb 14 11:01:08 crc kubenswrapper[4736]: E0214 11:01:08.885577 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" containerName="glance-db-sync" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.885590 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" containerName="glance-db-sync" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.885793 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" containerName="glance-db-sync" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.886673 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.908210 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-m4sm5"] Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.934752 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.934883 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-config\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.934926 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.935049 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzgls\" (UniqueName: \"kubernetes.io/projected/139d84a5-037c-497e-a77e-23eeb4e993c7-kube-api-access-rzgls\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.935104 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:08 crc kubenswrapper[4736]: I0214 11:01:08.935131 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.037360 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.037505 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-config\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.037617 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.037761 4736 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rzgls\" (UniqueName: \"kubernetes.io/projected/139d84a5-037c-497e-a77e-23eeb4e993c7-kube-api-access-rzgls\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.037796 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.038275 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.038288 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.039824 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-config\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.039958 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.041452 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.042255 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.062508 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzgls\" (UniqueName: \"kubernetes.io/projected/139d84a5-037c-497e-a77e-23eeb4e993c7-kube-api-access-rzgls\") pod \"dnsmasq-dns-785d8bcb8c-m4sm5\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.213403 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.756718 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.757999 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.767042 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.767983 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.768279 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7n2v6" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.780059 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.856416 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk6w9\" (UniqueName: \"kubernetes.io/projected/d25fc1d5-2a1a-4c43-9752-e7fac6623591-kube-api-access-vk6w9\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.856502 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-config-data\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.856540 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 
11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.856568 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-scripts\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.856595 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.856661 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.856728 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-logs\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.958772 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.958869 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-logs\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.958927 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk6w9\" (UniqueName: \"kubernetes.io/projected/d25fc1d5-2a1a-4c43-9752-e7fac6623591-kube-api-access-vk6w9\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.958980 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-config-data\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.959024 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.959047 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-scripts\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.959081 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.960033 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.960447 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-logs\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.961078 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.995093 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:09 crc kubenswrapper[4736]: I0214 11:01:09.999466 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-scripts\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.000396 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.010123 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-config-data\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.010866 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk6w9\" (UniqueName: \"kubernetes.io/projected/d25fc1d5-2a1a-4c43-9752-e7fac6623591-kube-api-access-vk6w9\") pod \"glance-default-external-api-0\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.092365 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.096538 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.099793 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.113008 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.128962 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.162645 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.163074 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68ncx\" (UniqueName: \"kubernetes.io/projected/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-kube-api-access-68ncx\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.163232 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.163378 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.163538 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.163648 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.163778 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.265529 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.266656 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.267013 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.267301 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.267335 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68ncx\" (UniqueName: \"kubernetes.io/projected/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-kube-api-access-68ncx\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.267417 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.267503 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.267968 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.268161 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.269761 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.276099 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: E0214 11:01:10.276526 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 14 11:01:10 crc kubenswrapper[4736]: E0214 11:01:10.276650 4736 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]Env
FromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-z89bc_openstack(f8f62557-0339-4cd9-884b-a3fdbc564ed0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.277189 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: E0214 11:01:10.277808 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-z89bc" podUID="f8f62557-0339-4cd9-884b-a3fdbc564ed0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.283465 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68ncx\" (UniqueName: \"kubernetes.io/projected/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-kube-api-access-68ncx\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.291098 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.294605 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:10 crc kubenswrapper[4736]: I0214 11:01:10.424507 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:11 crc kubenswrapper[4736]: E0214 11:01:11.013929 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-z89bc" podUID="f8f62557-0339-4cd9-884b-a3fdbc564ed0" Feb 14 11:01:12 crc kubenswrapper[4736]: I0214 11:01:12.119870 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:01:12 crc kubenswrapper[4736]: I0214 11:01:12.201867 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:01:12 crc kubenswrapper[4736]: I0214 11:01:12.445366 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-gtvzw" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Feb 14 11:01:12 crc kubenswrapper[4736]: I0214 11:01:12.445469 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 11:01:12 crc kubenswrapper[4736]: E0214 11:01:12.544085 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 14 11:01:12 crc kubenswrapper[4736]: E0214 11:01:12.544228 4736 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh679hdbhfbhb8hc9h5cdh68fh597hb9h57h65ch5bh597h64fh5bdh55dh568h65fhc6h56fh568h88h557h64h576h579hbbh59fh9bh5bfh555q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jb8j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-6f8476fff7-jvqbj_openstack(41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 11:01:12 crc kubenswrapper[4736]: E0214 11:01:12.549046 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6f8476fff7-jvqbj" podUID="41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d" Feb 14 11:01:12 crc kubenswrapper[4736]: E0214 11:01:12.566241 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 14 11:01:12 crc kubenswrapper[4736]: E0214 11:01:12.566379 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n545h569h7dh5f8h54fh578h68bh596hdbh5bfh5f6hf5h64fh646h5f4hf8hdfhdfhdch555hc7h64ch7ch5bdh84hfh5c7hcch546h688h5f9h674q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6746767d7f-fbbd6_openstack(6d41eaae-c5d4-4c07-9092-88977262c313): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 11:01:12 crc kubenswrapper[4736]: E0214 
11:01:12.568259 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6746767d7f-fbbd6" podUID="6d41eaae-c5d4-4c07-9092-88977262c313" Feb 14 11:01:16 crc kubenswrapper[4736]: I0214 11:01:16.052600 4736 generic.go:334] "Generic (PLEG): container finished" podID="abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c" containerID="f3b4377da7dd5855e4eff16ad5c07880f6f10d48d1fe9b2819209d15a27858e7" exitCode=0 Feb 14 11:01:16 crc kubenswrapper[4736]: I0214 11:01:16.052675 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2kwz6" event={"ID":"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c","Type":"ContainerDied","Data":"f3b4377da7dd5855e4eff16ad5c07880f6f10d48d1fe9b2819209d15a27858e7"} Feb 14 11:01:17 crc kubenswrapper[4736]: I0214 11:01:17.695662 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:01:17 crc kubenswrapper[4736]: I0214 11:01:17.695842 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:01:20 crc kubenswrapper[4736]: E0214 11:01:20.704314 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 14 11:01:20 crc kubenswrapper[4736]: E0214 11:01:20.704808 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbkpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4ksm2_openstack(df559ea6-6169-48d5-a47c-f765681b9a1e): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Feb 14 11:01:20 crc kubenswrapper[4736]: E0214 11:01:20.705982 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4ksm2" podUID="df559ea6-6169-48d5-a47c-f765681b9a1e" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.831887 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.831951 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.839167 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.844796 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.866903 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-dns-svc\") pod \"f9d4ed58-4f61-4c47-acf8-09837e068c27\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867002 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-nb\") pod \"f9d4ed58-4f61-4c47-acf8-09837e068c27\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867156 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb8j6\" (UniqueName: \"kubernetes.io/projected/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-kube-api-access-jb8j6\") pod \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867246 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-scripts\") pod \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867311 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-scripts\") pod \"6d41eaae-c5d4-4c07-9092-88977262c313\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867357 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh2n2\" (UniqueName: 
\"kubernetes.io/projected/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-kube-api-access-lh2n2\") pod \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867443 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2ksp\" (UniqueName: \"kubernetes.io/projected/6d41eaae-c5d4-4c07-9092-88977262c313-kube-api-access-l2ksp\") pod \"6d41eaae-c5d4-4c07-9092-88977262c313\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867468 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-sb\") pod \"f9d4ed58-4f61-4c47-acf8-09837e068c27\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867517 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-config-data\") pod \"6d41eaae-c5d4-4c07-9092-88977262c313\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867555 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d41eaae-c5d4-4c07-9092-88977262c313-logs\") pod \"6d41eaae-c5d4-4c07-9092-88977262c313\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.867619 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d41eaae-c5d4-4c07-9092-88977262c313-horizon-secret-key\") pod \"6d41eaae-c5d4-4c07-9092-88977262c313\" (UID: \"6d41eaae-c5d4-4c07-9092-88977262c313\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 
11:01:20.867642 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-combined-ca-bundle\") pod \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.868107 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-logs\") pod \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.868135 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-config\") pod \"f9d4ed58-4f61-4c47-acf8-09837e068c27\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.868184 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-config-data\") pod \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.872383 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-config-data" (OuterVolumeSpecName: "config-data") pod "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d" (UID: "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.874112 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-scripts" (OuterVolumeSpecName: "scripts") pod "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d" (UID: "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.874262 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-config-data" (OuterVolumeSpecName: "config-data") pod "6d41eaae-c5d4-4c07-9092-88977262c313" (UID: "6d41eaae-c5d4-4c07-9092-88977262c313"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.875396 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-logs" (OuterVolumeSpecName: "logs") pod "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d" (UID: "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.875662 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d41eaae-c5d4-4c07-9092-88977262c313-kube-api-access-l2ksp" (OuterVolumeSpecName: "kube-api-access-l2ksp") pod "6d41eaae-c5d4-4c07-9092-88977262c313" (UID: "6d41eaae-c5d4-4c07-9092-88977262c313"). InnerVolumeSpecName "kube-api-access-l2ksp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.876131 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-scripts" (OuterVolumeSpecName: "scripts") pod "6d41eaae-c5d4-4c07-9092-88977262c313" (UID: "6d41eaae-c5d4-4c07-9092-88977262c313"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.898913 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d41eaae-c5d4-4c07-9092-88977262c313-logs" (OuterVolumeSpecName: "logs") pod "6d41eaae-c5d4-4c07-9092-88977262c313" (UID: "6d41eaae-c5d4-4c07-9092-88977262c313"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.912579 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-kube-api-access-jb8j6" (OuterVolumeSpecName: "kube-api-access-jb8j6") pod "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d" (UID: "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d"). InnerVolumeSpecName "kube-api-access-jb8j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.925724 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-kube-api-access-lh2n2" (OuterVolumeSpecName: "kube-api-access-lh2n2") pod "abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c" (UID: "abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c"). InnerVolumeSpecName "kube-api-access-lh2n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.926232 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d41eaae-c5d4-4c07-9092-88977262c313-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6d41eaae-c5d4-4c07-9092-88977262c313" (UID: "6d41eaae-c5d4-4c07-9092-88977262c313"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.974669 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tcsw\" (UniqueName: \"kubernetes.io/projected/f9d4ed58-4f61-4c47-acf8-09837e068c27-kube-api-access-4tcsw\") pod \"f9d4ed58-4f61-4c47-acf8-09837e068c27\" (UID: \"f9d4ed58-4f61-4c47-acf8-09837e068c27\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.974727 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-horizon-secret-key\") pod \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\" (UID: \"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975018 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-config\") pod \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\" (UID: \"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c\") " Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975368 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975385 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975398 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb8j6\" (UniqueName: \"kubernetes.io/projected/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-kube-api-access-jb8j6\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975411 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975422 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975432 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh2n2\" (UniqueName: \"kubernetes.io/projected/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-kube-api-access-lh2n2\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975444 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2ksp\" (UniqueName: \"kubernetes.io/projected/6d41eaae-c5d4-4c07-9092-88977262c313-kube-api-access-l2ksp\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975455 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d41eaae-c5d4-4c07-9092-88977262c313-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.975465 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d41eaae-c5d4-4c07-9092-88977262c313-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc 
kubenswrapper[4736]: I0214 11:01:20.975474 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6d41eaae-c5d4-4c07-9092-88977262c313-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.979572 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9d4ed58-4f61-4c47-acf8-09837e068c27" (UID: "f9d4ed58-4f61-4c47-acf8-09837e068c27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.980410 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d4ed58-4f61-4c47-acf8-09837e068c27-kube-api-access-4tcsw" (OuterVolumeSpecName: "kube-api-access-4tcsw") pod "f9d4ed58-4f61-4c47-acf8-09837e068c27" (UID: "f9d4ed58-4f61-4c47-acf8-09837e068c27"). InnerVolumeSpecName "kube-api-access-4tcsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.984428 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9d4ed58-4f61-4c47-acf8-09837e068c27" (UID: "f9d4ed58-4f61-4c47-acf8-09837e068c27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.984190 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c" (UID: "abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:20 crc kubenswrapper[4736]: I0214 11:01:20.998672 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-config" (OuterVolumeSpecName: "config") pod "abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c" (UID: "abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:20.999964 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d" (UID: "41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.011867 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-config" (OuterVolumeSpecName: "config") pod "f9d4ed58-4f61-4c47-acf8-09837e068c27" (UID: "f9d4ed58-4f61-4c47-acf8-09837e068c27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.020977 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f9d4ed58-4f61-4c47-acf8-09837e068c27" (UID: "f9d4ed58-4f61-4c47-acf8-09837e068c27"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.077057 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.077103 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tcsw\" (UniqueName: \"kubernetes.io/projected/f9d4ed58-4f61-4c47-acf8-09837e068c27-kube-api-access-4tcsw\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.077124 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.077139 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.077151 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.077161 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.077173 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.077186 4736 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d4ed58-4f61-4c47-acf8-09837e068c27-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.106861 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6746767d7f-fbbd6" event={"ID":"6d41eaae-c5d4-4c07-9092-88977262c313","Type":"ContainerDied","Data":"ff58a93796aaba62612668eeb727e59e13c00f750f7dbbb5357338ff559c4907"} Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.107016 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6746767d7f-fbbd6" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.111613 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gtvzw" event={"ID":"f9d4ed58-4f61-4c47-acf8-09837e068c27","Type":"ContainerDied","Data":"aa93ebd80d832df1c42fb9a9908a34b4c78ff4d44d899d7c84531bef30a0878f"} Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.111653 4736 scope.go:117] "RemoveContainer" containerID="84e9d2785827d8ed1bfee59f606255436323d45e3c5f87729da4a707985fb9fc" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.111775 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gtvzw" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.118990 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2kwz6" event={"ID":"abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c","Type":"ContainerDied","Data":"c28426f3f7130b706a8f5812c304c984b24153d1613406c4372a3eb394cbec81"} Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.119026 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28426f3f7130b706a8f5812c304c984b24153d1613406c4372a3eb394cbec81" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.119004 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2kwz6" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.120544 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f8476fff7-jvqbj" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.120775 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f8476fff7-jvqbj" event={"ID":"41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d","Type":"ContainerDied","Data":"4a261e7aff27095b7d565915df97ca74648f25b5e3936d0f221759f1a7178ff6"} Feb 14 11:01:21 crc kubenswrapper[4736]: E0214 11:01:21.121322 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-4ksm2" podUID="df559ea6-6169-48d5-a47c-f765681b9a1e" Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.184752 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6746767d7f-fbbd6"] Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.200678 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6746767d7f-fbbd6"] Feb 14 
11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.247251 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f8476fff7-jvqbj"] Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.260083 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f8476fff7-jvqbj"] Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.268074 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gtvzw"] Feb 14 11:01:21 crc kubenswrapper[4736]: I0214 11:01:21.277557 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gtvzw"] Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.131156 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-m4sm5"] Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.186120 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7tmft"] Feb 14 11:01:22 crc kubenswrapper[4736]: E0214 11:01:22.186570 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c" containerName="neutron-db-sync" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.186589 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c" containerName="neutron-db-sync" Feb 14 11:01:22 crc kubenswrapper[4736]: E0214 11:01:22.186608 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="dnsmasq-dns" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.186616 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="dnsmasq-dns" Feb 14 11:01:22 crc kubenswrapper[4736]: E0214 11:01:22.186638 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="init" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 
11:01:22.186646 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="init" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.186853 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c" containerName="neutron-db-sync" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.186879 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="dnsmasq-dns" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.187981 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.208109 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7tmft"] Feb 14 11:01:22 crc kubenswrapper[4736]: E0214 11:01:22.310944 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 14 11:01:22 crc kubenswrapper[4736]: E0214 11:01:22.311140 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw8rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9bdr9_openstack(d43521c3-8892-4a34-af06-1d93a8f50c38): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.314879 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f99c476c6-hk87j"] Feb 14 11:01:22 crc kubenswrapper[4736]: E0214 11:01:22.316143 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9bdr9" podUID="d43521c3-8892-4a34-af06-1d93a8f50c38" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.317081 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.317133 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.317174 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wbq\" (UniqueName: \"kubernetes.io/projected/fba91db3-6b2e-40fa-87dd-9211f5976bec-kube-api-access-f7wbq\") pod 
\"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.317200 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.317223 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-config\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.317310 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.318097 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.320631 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nzfzh" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.320773 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.320826 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.320914 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.352931 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f99c476c6-hk87j"] Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.389720 4736 scope.go:117] "RemoveContainer" containerID="2c033dacd717e88056c5d5f7323364d7d76b6bf9f9812b6121565976e2d91b31" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.429389 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.429949 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.430456 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.431171 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.431285 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wbq\" (UniqueName: \"kubernetes.io/projected/fba91db3-6b2e-40fa-87dd-9211f5976bec-kube-api-access-f7wbq\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.431630 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxs26\" (UniqueName: \"kubernetes.io/projected/8abb8167-96bf-4ea8-8613-549c33aa15e6-kube-api-access-nxs26\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.431653 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.431670 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-config\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.431875 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-httpd-config\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.431914 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.431968 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-ovndb-tls-certs\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.432037 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-combined-ca-bundle\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.432082 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-config\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.432626 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d" path="/var/lib/kubelet/pods/41ded9d9-6aa8-4ef2-a75e-0570a58b3b2d/volumes" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.433057 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d41eaae-c5d4-4c07-9092-88977262c313" path="/var/lib/kubelet/pods/6d41eaae-c5d4-4c07-9092-88977262c313/volumes" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.433379 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" path="/var/lib/kubelet/pods/f9d4ed58-4f61-4c47-acf8-09837e068c27/volumes" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.434449 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.437112 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-config\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.437128 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: 
\"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.446254 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-gtvzw" podUID="f9d4ed58-4f61-4c47-acf8-09837e068c27" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.453418 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wbq\" (UniqueName: \"kubernetes.io/projected/fba91db3-6b2e-40fa-87dd-9211f5976bec-kube-api-access-f7wbq\") pod \"dnsmasq-dns-55f844cf75-7tmft\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.509134 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.535976 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-combined-ca-bundle\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.536021 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-config\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.536087 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxs26\" (UniqueName: \"kubernetes.io/projected/8abb8167-96bf-4ea8-8613-549c33aa15e6-kube-api-access-nxs26\") 
pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.536142 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-httpd-config\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.536174 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-ovndb-tls-certs\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.545398 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-httpd-config\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.551386 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-ovndb-tls-certs\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.551389 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-config\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc 
kubenswrapper[4736]: I0214 11:01:22.555433 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxs26\" (UniqueName: \"kubernetes.io/projected/8abb8167-96bf-4ea8-8613-549c33aa15e6-kube-api-access-nxs26\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.555502 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-combined-ca-bundle\") pod \"neutron-7f99c476c6-hk87j\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.755959 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:22 crc kubenswrapper[4736]: I0214 11:01:22.917361 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54b8d5f54d-bvjc4"] Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.063608 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-m4sm5"] Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.176124 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5644b876d5-wp4lb" event={"ID":"2620f316-944b-449d-88cf-60670074d345","Type":"ContainerStarted","Data":"62cd66d80e587c5b3ee68706657070a4caf3c084aefb55a5cab8454c9516dba4"} Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.176174 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5644b876d5-wp4lb" event={"ID":"2620f316-944b-449d-88cf-60670074d345","Type":"ContainerStarted","Data":"d83ed0c2093eb543cc31edff4ef23ff64946945c062191c857d39c565671171e"} Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.176302 4736 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/horizon-5644b876d5-wp4lb" podUID="2620f316-944b-449d-88cf-60670074d345" containerName="horizon-log" containerID="cri-o://d83ed0c2093eb543cc31edff4ef23ff64946945c062191c857d39c565671171e" gracePeriod=30 Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.176788 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5644b876d5-wp4lb" podUID="2620f316-944b-449d-88cf-60670074d345" containerName="horizon" containerID="cri-o://62cd66d80e587c5b3ee68706657070a4caf3c084aefb55a5cab8454c9516dba4" gracePeriod=30 Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.189453 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" event={"ID":"139d84a5-037c-497e-a77e-23eeb4e993c7","Type":"ContainerStarted","Data":"3fc5dd37cd0b0c9d9f6181a9593436af1894377abc50e63d1ea9253e037f4c31"} Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.191634 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerStarted","Data":"f92cb76843e9644ff052bb11c175e5a9526ac0fbad72806d17069a56b766f77c"} Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.194137 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93585de-a12c-446d-a045-16d74eb6d7db","Type":"ContainerStarted","Data":"4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1"} Feb 14 11:01:23 crc kubenswrapper[4736]: E0214 11:01:23.196638 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9bdr9" podUID="d43521c3-8892-4a34-af06-1d93a8f50c38" Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.205979 4736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5644b876d5-wp4lb" podStartSLOduration=5.055262576 podStartE2EDuration="32.205960851s" podCreationTimestamp="2026-02-14 11:00:51 +0000 UTC" firstStartedPulling="2026-02-14 11:00:53.577684507 +0000 UTC m=+1163.946311875" lastFinishedPulling="2026-02-14 11:01:20.728382782 +0000 UTC m=+1191.097010150" observedRunningTime="2026-02-14 11:01:23.199443353 +0000 UTC m=+1193.568070721" watchObservedRunningTime="2026-02-14 11:01:23.205960851 +0000 UTC m=+1193.574588219" Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.281724 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tb7hg"] Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.310663 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d96c5d8-mfqqp"] Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.413353 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.449328 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7tmft"] Feb 14 11:01:23 crc kubenswrapper[4736]: W0214 11:01:23.651178 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8abb8167_96bf_4ea8_8613_549c33aa15e6.slice/crio-f83d4eabb1e980d1a25040255631b59172e90cd09778d81288db7dd14ea698f1 WatchSource:0}: Error finding container f83d4eabb1e980d1a25040255631b59172e90cd09778d81288db7dd14ea698f1: Status 404 returned error can't find the container with id f83d4eabb1e980d1a25040255631b59172e90cd09778d81288db7dd14ea698f1 Feb 14 11:01:23 crc kubenswrapper[4736]: I0214 11:01:23.658364 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f99c476c6-hk87j"] Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.211839 4736 generic.go:334] "Generic 
(PLEG): container finished" podID="139d84a5-037c-497e-a77e-23eeb4e993c7" containerID="44c8eb6ea6548a5c0b9ae8d921494e43f7e30fe66f7ee6ee5566d6edf7054179" exitCode=0 Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.212489 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" event={"ID":"139d84a5-037c-497e-a77e-23eeb4e993c7","Type":"ContainerDied","Data":"44c8eb6ea6548a5c0b9ae8d921494e43f7e30fe66f7ee6ee5566d6edf7054179"} Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.237977 4736 generic.go:334] "Generic (PLEG): container finished" podID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerID="a9bfc0f8ca6f3ebc6202eefa756c843c666f415ef435d726bc5ba29d43affa18" exitCode=0 Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.238065 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" event={"ID":"fba91db3-6b2e-40fa-87dd-9211f5976bec","Type":"ContainerDied","Data":"a9bfc0f8ca6f3ebc6202eefa756c843c666f415ef435d726bc5ba29d43affa18"} Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.238091 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" event={"ID":"fba91db3-6b2e-40fa-87dd-9211f5976bec","Type":"ContainerStarted","Data":"982e16a5d17dade46bfe78ff6ee5618251eacfafe0291b41518e206dc0ae1596"} Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.281171 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tb7hg" event={"ID":"3474d549-a236-46a6-ad9a-46186dca5831","Type":"ContainerStarted","Data":"bfd7e43aa7874187cacc4cc946c8313c56433ba8ee5357eb660526803a99698c"} Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.281211 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tb7hg" event={"ID":"3474d549-a236-46a6-ad9a-46186dca5831","Type":"ContainerStarted","Data":"687b13a3217819d80bb9ddcfc6923e9d4504e0ae2b0f079ee0a132fb9f8708ff"} Feb 14 11:01:24 crc 
kubenswrapper[4736]: I0214 11:01:24.336106 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tb7hg" podStartSLOduration=22.336088694 podStartE2EDuration="22.336088694s" podCreationTimestamp="2026-02-14 11:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:24.325127267 +0000 UTC m=+1194.693754635" watchObservedRunningTime="2026-02-14 11:01:24.336088694 +0000 UTC m=+1194.704716062" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.340928 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.361023 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerStarted","Data":"e9afa700f170b4aa20f9303e305f513dc88cc3df4f06793ac247cb0b4ca2f8ad"} Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.361307 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerStarted","Data":"6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9"} Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.388393 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f99c476c6-hk87j" event={"ID":"8abb8167-96bf-4ea8-8613-549c33aa15e6","Type":"ContainerStarted","Data":"f83d4eabb1e980d1a25040255631b59172e90cd09778d81288db7dd14ea698f1"} Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.389609 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d25fc1d5-2a1a-4c43-9752-e7fac6623591","Type":"ContainerStarted","Data":"c14fd164061da6837f428c4fdd602b96094effd39f3829163bbf9e1dc946e519"} Feb 14 11:01:24 crc 
kubenswrapper[4736]: I0214 11:01:24.390646 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d96c5d8-mfqqp" event={"ID":"bd003c66-fc46-445a-a88a-23a7c17f9747","Type":"ContainerStarted","Data":"bc79374fb35a319ef6a5e3623774f2c982da9709545ca84d1636e7156687a9c3"} Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.390670 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d96c5d8-mfqqp" event={"ID":"bd003c66-fc46-445a-a88a-23a7c17f9747","Type":"ContainerStarted","Data":"45767efb0a1f71e1dc5e84fb61fbf131610ca8ab274d599293bd024af937529d"} Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.421105 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-54b8d5f54d-bvjc4" podStartSLOduration=25.421091656 podStartE2EDuration="25.421091656s" podCreationTimestamp="2026-02-14 11:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:24.402121077 +0000 UTC m=+1194.770748445" watchObservedRunningTime="2026-02-14 11:01:24.421091656 +0000 UTC m=+1194.789719024" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.700712 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77f6fd57bc-nlqb5"] Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.707533 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.711791 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.715455 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.726488 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77f6fd57bc-nlqb5"] Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.808676 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-ovndb-tls-certs\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.808793 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-httpd-config\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.808953 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-public-tls-certs\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.809083 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cgvp\" (UniqueName: 
\"kubernetes.io/projected/2c911912-7053-4fc2-a31e-20bcce081834-kube-api-access-2cgvp\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.809133 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-config\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.809151 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-combined-ca-bundle\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.809170 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-internal-tls-certs\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.910893 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-httpd-config\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.911123 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-public-tls-certs\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.911239 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cgvp\" (UniqueName: \"kubernetes.io/projected/2c911912-7053-4fc2-a31e-20bcce081834-kube-api-access-2cgvp\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.911325 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-config\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.911395 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-combined-ca-bundle\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.911461 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-internal-tls-certs\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.911572 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-ovndb-tls-certs\") pod 
\"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.918187 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-combined-ca-bundle\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.923542 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-ovndb-tls-certs\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.930485 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-internal-tls-certs\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.932985 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-httpd-config\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.934251 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-config\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc 
kubenswrapper[4736]: I0214 11:01:24.936132 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cgvp\" (UniqueName: \"kubernetes.io/projected/2c911912-7053-4fc2-a31e-20bcce081834-kube-api-access-2cgvp\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:24 crc kubenswrapper[4736]: I0214 11:01:24.936293 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-public-tls-certs\") pod \"neutron-77f6fd57bc-nlqb5\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.166729 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.381106 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5"
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.420941 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzgls\" (UniqueName: \"kubernetes.io/projected/139d84a5-037c-497e-a77e-23eeb4e993c7-kube-api-access-rzgls\") pod \"139d84a5-037c-497e-a77e-23eeb4e993c7\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") "
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.421032 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-sb\") pod \"139d84a5-037c-497e-a77e-23eeb4e993c7\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") "
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.421136 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-config\") pod \"139d84a5-037c-497e-a77e-23eeb4e993c7\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") "
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.421159 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-svc\") pod \"139d84a5-037c-497e-a77e-23eeb4e993c7\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") "
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.421191 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-swift-storage-0\") pod \"139d84a5-037c-497e-a77e-23eeb4e993c7\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") "
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.421237 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-nb\") pod \"139d84a5-037c-497e-a77e-23eeb4e993c7\" (UID: \"139d84a5-037c-497e-a77e-23eeb4e993c7\") "
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.422661 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d0581c3-9ebb-4107-bb47-4c33f04a99b5","Type":"ContainerStarted","Data":"a36c8c9763fae6c874d5d0eee99a5aad72851d7a8b58cc30a14a0fceff87ff1e"}
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.444595 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d25fc1d5-2a1a-4c43-9752-e7fac6623591","Type":"ContainerStarted","Data":"9378f881e3152de6b1e5af6d6370a7e644caf25d78156655790eb536c01e470e"}
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.448616 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139d84a5-037c-497e-a77e-23eeb4e993c7-kube-api-access-rzgls" (OuterVolumeSpecName: "kube-api-access-rzgls") pod "139d84a5-037c-497e-a77e-23eeb4e993c7" (UID: "139d84a5-037c-497e-a77e-23eeb4e993c7"). InnerVolumeSpecName "kube-api-access-rzgls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.463583 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d96c5d8-mfqqp" event={"ID":"bd003c66-fc46-445a-a88a-23a7c17f9747","Type":"ContainerStarted","Data":"04fd8fab3519745e093dbed42df83c22c60787a9527db958728640db4965d92b"}
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.510106 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "139d84a5-037c-497e-a77e-23eeb4e993c7" (UID: "139d84a5-037c-497e-a77e-23eeb4e993c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.513500 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5" event={"ID":"139d84a5-037c-497e-a77e-23eeb4e993c7","Type":"ContainerDied","Data":"3fc5dd37cd0b0c9d9f6181a9593436af1894377abc50e63d1ea9253e037f4c31"}
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.513548 4736 scope.go:117] "RemoveContainer" containerID="44c8eb6ea6548a5c0b9ae8d921494e43f7e30fe66f7ee6ee5566d6edf7054179"
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.513676 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-m4sm5"
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.543470 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzgls\" (UniqueName: \"kubernetes.io/projected/139d84a5-037c-497e-a77e-23eeb4e993c7-kube-api-access-rzgls\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.543505 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.548370 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f99c476c6-hk87j" event={"ID":"8abb8167-96bf-4ea8-8613-549c33aa15e6","Type":"ContainerStarted","Data":"3527ccb536661d475b9d882531c095b83134fbe1905f6c450532ec8e7d30574f"}
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.559550 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78d96c5d8-mfqqp" podStartSLOduration=25.559532469 podStartE2EDuration="25.559532469s" podCreationTimestamp="2026-02-14 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:25.527003377 +0000 UTC m=+1195.895630765" watchObservedRunningTime="2026-02-14 11:01:25.559532469 +0000 UTC m=+1195.928159837"
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.574200 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "139d84a5-037c-497e-a77e-23eeb4e993c7" (UID: "139d84a5-037c-497e-a77e-23eeb4e993c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.614350 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "139d84a5-037c-497e-a77e-23eeb4e993c7" (UID: "139d84a5-037c-497e-a77e-23eeb4e993c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.639639 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "139d84a5-037c-497e-a77e-23eeb4e993c7" (UID: "139d84a5-037c-497e-a77e-23eeb4e993c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.657891 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.657923 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.657932 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.707205 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-config" (OuterVolumeSpecName: "config") pod "139d84a5-037c-497e-a77e-23eeb4e993c7" (UID: "139d84a5-037c-497e-a77e-23eeb4e993c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:01:25 crc kubenswrapper[4736]: I0214 11:01:25.759193 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/139d84a5-037c-497e-a77e-23eeb4e993c7-config\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.076066 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-m4sm5"]
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.080425 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-m4sm5"]
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.124665 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77f6fd57bc-nlqb5"]
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.441695 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139d84a5-037c-497e-a77e-23eeb4e993c7" path="/var/lib/kubelet/pods/139d84a5-037c-497e-a77e-23eeb4e993c7/volumes"
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.564907 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93585de-a12c-446d-a045-16d74eb6d7db","Type":"ContainerStarted","Data":"a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a"}
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.570009 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f99c476c6-hk87j" event={"ID":"8abb8167-96bf-4ea8-8613-549c33aa15e6","Type":"ContainerStarted","Data":"69b14602eb8cc9ee1f5cb103ba79444bff1af9d6df7f63ef16e65394c8e79c69"}
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.571032 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f99c476c6-hk87j"
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.572164 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77f6fd57bc-nlqb5" event={"ID":"2c911912-7053-4fc2-a31e-20bcce081834","Type":"ContainerStarted","Data":"290dcd17cab0995c23cf6a6b09cdf87096156d00c073da69d9ce2d36114c41d3"}
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.581325 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" event={"ID":"fba91db3-6b2e-40fa-87dd-9211f5976bec","Type":"ContainerStarted","Data":"adb59dd9fd022260cdba7dc19c6e034ca0a99277c8227c9297f67a6c41fdb2d5"}
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.581429 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-7tmft"
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.586711 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f99c476c6-hk87j" podStartSLOduration=4.58670245 podStartE2EDuration="4.58670245s" podCreationTimestamp="2026-02-14 11:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:26.585344801 +0000 UTC m=+1196.953972169" watchObservedRunningTime="2026-02-14 11:01:26.58670245 +0000 UTC m=+1196.955329818"
Feb 14 11:01:26 crc kubenswrapper[4736]: I0214 11:01:26.620488 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" podStartSLOduration=4.620470148 podStartE2EDuration="4.620470148s" podCreationTimestamp="2026-02-14 11:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:26.616062131 +0000 UTC m=+1196.984689499" watchObservedRunningTime="2026-02-14 11:01:26.620470148 +0000 UTC m=+1196.989097506"
Feb 14 11:01:30 crc kubenswrapper[4736]: I0214 11:01:30.271823 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54b8d5f54d-bvjc4"
Feb 14 11:01:30 crc kubenswrapper[4736]: I0214 11:01:30.272218 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54b8d5f54d-bvjc4"
Feb 14 11:01:30 crc kubenswrapper[4736]: I0214 11:01:30.435080 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d96c5d8-mfqqp"
Feb 14 11:01:30 crc kubenswrapper[4736]: I0214 11:01:30.435501 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d96c5d8-mfqqp"
Feb 14 11:01:31 crc kubenswrapper[4736]: I0214 11:01:31.624251 4736 generic.go:334] "Generic (PLEG): container finished" podID="3474d549-a236-46a6-ad9a-46186dca5831" containerID="bfd7e43aa7874187cacc4cc946c8313c56433ba8ee5357eb660526803a99698c" exitCode=0
Feb 14 11:01:31 crc kubenswrapper[4736]: I0214 11:01:31.624325 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tb7hg" event={"ID":"3474d549-a236-46a6-ad9a-46186dca5831","Type":"ContainerDied","Data":"bfd7e43aa7874187cacc4cc946c8313c56433ba8ee5357eb660526803a99698c"}
Feb 14 11:01:31 crc kubenswrapper[4736]: I0214 11:01:31.625518 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d0581c3-9ebb-4107-bb47-4c33f04a99b5","Type":"ContainerStarted","Data":"eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a"}
Feb 14 11:01:31 crc kubenswrapper[4736]: I0214 11:01:31.627160 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d25fc1d5-2a1a-4c43-9752-e7fac6623591","Type":"ContainerStarted","Data":"77b2cf073d9f571cc58150341656490e7da31fdf99782db3de73d261961298b9"}
Feb 14 11:01:31 crc kubenswrapper[4736]: I0214 11:01:31.627314 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerName="glance-log" containerID="cri-o://9378f881e3152de6b1e5af6d6370a7e644caf25d78156655790eb536c01e470e" gracePeriod=30
Feb 14 11:01:31 crc kubenswrapper[4736]: I0214 11:01:31.627459 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerName="glance-httpd" containerID="cri-o://77b2cf073d9f571cc58150341656490e7da31fdf99782db3de73d261961298b9" gracePeriod=30
Feb 14 11:01:31 crc kubenswrapper[4736]: I0214 11:01:31.640312 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5644b876d5-wp4lb"
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.527272 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-7tmft"
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.652677 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z89bc" event={"ID":"f8f62557-0339-4cd9-884b-a3fdbc564ed0","Type":"ContainerStarted","Data":"7c3ac7afc2de52097134e8b1842711fd77186d0d2fc2ec237c1207476458278f"}
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.657851 4736 generic.go:334] "Generic (PLEG): container finished" podID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerID="77b2cf073d9f571cc58150341656490e7da31fdf99782db3de73d261961298b9" exitCode=143
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.657893 4736 generic.go:334] "Generic (PLEG): container finished" podID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerID="9378f881e3152de6b1e5af6d6370a7e644caf25d78156655790eb536c01e470e" exitCode=143
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.657977 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d25fc1d5-2a1a-4c43-9752-e7fac6623591","Type":"ContainerDied","Data":"77b2cf073d9f571cc58150341656490e7da31fdf99782db3de73d261961298b9"}
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.658005 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d25fc1d5-2a1a-4c43-9752-e7fac6623591","Type":"ContainerDied","Data":"9378f881e3152de6b1e5af6d6370a7e644caf25d78156655790eb536c01e470e"}
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.664644 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77f6fd57bc-nlqb5" event={"ID":"2c911912-7053-4fc2-a31e-20bcce081834","Type":"ContainerStarted","Data":"07e1681a68e5e0fff31272879d764409fed468fa7ef7cde47aa3cf5cadb8c5d7"}
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.695151 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=24.695134083 podStartE2EDuration="24.695134083s" podCreationTimestamp="2026-02-14 11:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:31.687616432 +0000 UTC m=+1202.056243800" watchObservedRunningTime="2026-02-14 11:01:32.695134083 +0000 UTC m=+1203.063761441"
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.707631 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.777636 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-z89bc" podStartSLOduration=3.439310981 podStartE2EDuration="41.777616552s" podCreationTimestamp="2026-02-14 11:00:51 +0000 UTC" firstStartedPulling="2026-02-14 11:00:53.582961779 +0000 UTC m=+1163.951589137" lastFinishedPulling="2026-02-14 11:01:31.92126734 +0000 UTC m=+1202.289894708" observedRunningTime="2026-02-14 11:01:32.73299894 +0000 UTC m=+1203.101626308" watchObservedRunningTime="2026-02-14 11:01:32.777616552 +0000 UTC m=+1203.146243920"
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.798971 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-scripts\") pod \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") "
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.799122 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-config-data\") pod \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") "
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.799197 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-httpd-run\") pod \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") "
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.799263 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") "
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.799296 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-combined-ca-bundle\") pod \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") "
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.799344 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-logs\") pod \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") "
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.799429 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk6w9\" (UniqueName: \"kubernetes.io/projected/d25fc1d5-2a1a-4c43-9752-e7fac6623591-kube-api-access-vk6w9\") pod \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\" (UID: \"d25fc1d5-2a1a-4c43-9752-e7fac6623591\") "
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.799866 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d25fc1d5-2a1a-4c43-9752-e7fac6623591" (UID: "d25fc1d5-2a1a-4c43-9752-e7fac6623591"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.800317 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-logs" (OuterVolumeSpecName: "logs") pod "d25fc1d5-2a1a-4c43-9752-e7fac6623591" (UID: "d25fc1d5-2a1a-4c43-9752-e7fac6623591"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.817604 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-scripts" (OuterVolumeSpecName: "scripts") pod "d25fc1d5-2a1a-4c43-9752-e7fac6623591" (UID: "d25fc1d5-2a1a-4c43-9752-e7fac6623591"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.823498 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "d25fc1d5-2a1a-4c43-9752-e7fac6623591" (UID: "d25fc1d5-2a1a-4c43-9752-e7fac6623591"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.823717 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25fc1d5-2a1a-4c43-9752-e7fac6623591-kube-api-access-vk6w9" (OuterVolumeSpecName: "kube-api-access-vk6w9") pod "d25fc1d5-2a1a-4c43-9752-e7fac6623591" (UID: "d25fc1d5-2a1a-4c43-9752-e7fac6623591"). InnerVolumeSpecName "kube-api-access-vk6w9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.827733 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zm8d5"]
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.828027 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" podUID="a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" containerName="dnsmasq-dns" containerID="cri-o://eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d" gracePeriod=10
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.839717 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d25fc1d5-2a1a-4c43-9752-e7fac6623591" (UID: "d25fc1d5-2a1a-4c43-9752-e7fac6623591"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.905154 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.905197 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.905208 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.905217 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d25fc1d5-2a1a-4c43-9752-e7fac6623591-logs\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.905228 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk6w9\" (UniqueName: \"kubernetes.io/projected/d25fc1d5-2a1a-4c43-9752-e7fac6623591-kube-api-access-vk6w9\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.905236 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.934342 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-config-data" (OuterVolumeSpecName: "config-data") pod "d25fc1d5-2a1a-4c43-9752-e7fac6623591" (UID: "d25fc1d5-2a1a-4c43-9752-e7fac6623591"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:01:32 crc kubenswrapper[4736]: I0214 11:01:32.969653 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.007856 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d25fc1d5-2a1a-4c43-9752-e7fac6623591-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.007885 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.377538 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tb7hg"
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.426870 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-fernet-keys\") pod \"3474d549-a236-46a6-ad9a-46186dca5831\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.426924 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdkbz\" (UniqueName: \"kubernetes.io/projected/3474d549-a236-46a6-ad9a-46186dca5831-kube-api-access-wdkbz\") pod \"3474d549-a236-46a6-ad9a-46186dca5831\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.426989 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-credential-keys\") pod \"3474d549-a236-46a6-ad9a-46186dca5831\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.427059 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-scripts\") pod \"3474d549-a236-46a6-ad9a-46186dca5831\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.427091 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-combined-ca-bundle\") pod \"3474d549-a236-46a6-ad9a-46186dca5831\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.427152 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-config-data\") pod \"3474d549-a236-46a6-ad9a-46186dca5831\" (UID: \"3474d549-a236-46a6-ad9a-46186dca5831\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.438441 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-scripts" (OuterVolumeSpecName: "scripts") pod "3474d549-a236-46a6-ad9a-46186dca5831" (UID: "3474d549-a236-46a6-ad9a-46186dca5831"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.443212 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3474d549-a236-46a6-ad9a-46186dca5831" (UID: "3474d549-a236-46a6-ad9a-46186dca5831"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.447698 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3474d549-a236-46a6-ad9a-46186dca5831-kube-api-access-wdkbz" (OuterVolumeSpecName: "kube-api-access-wdkbz") pod "3474d549-a236-46a6-ad9a-46186dca5831" (UID: "3474d549-a236-46a6-ad9a-46186dca5831"). InnerVolumeSpecName "kube-api-access-wdkbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.454896 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3474d549-a236-46a6-ad9a-46186dca5831" (UID: "3474d549-a236-46a6-ad9a-46186dca5831"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.469497 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-config-data" (OuterVolumeSpecName: "config-data") pod "3474d549-a236-46a6-ad9a-46186dca5831" (UID: "3474d549-a236-46a6-ad9a-46186dca5831"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.486736 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3474d549-a236-46a6-ad9a-46186dca5831" (UID: "3474d549-a236-46a6-ad9a-46186dca5831"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.532416 4736 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.532451 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdkbz\" (UniqueName: \"kubernetes.io/projected/3474d549-a236-46a6-ad9a-46186dca5831-kube-api-access-wdkbz\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.532464 4736 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.532474 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.532483 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.532492 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3474d549-a236-46a6-ad9a-46186dca5831-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.569913 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5"
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.634724 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-svc\") pod \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.634843 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-sb\") pod \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.635046 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-swift-storage-0\") pod \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.635067 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-config\") pod \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.635085 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-nb\") pod \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.635122 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crng7\" (UniqueName: \"kubernetes.io/projected/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-kube-api-access-crng7\") pod \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\" (UID: \"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f\") "
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.641526 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-kube-api-access-crng7" (OuterVolumeSpecName: "kube-api-access-crng7") pod "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" (UID: "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f"). InnerVolumeSpecName "kube-api-access-crng7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.692012 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d0581c3-9ebb-4107-bb47-4c33f04a99b5","Type":"ContainerStarted","Data":"53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4"}
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.692162 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerName="glance-log" containerID="cri-o://eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a" gracePeriod=30
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.692632 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerName="glance-httpd" containerID="cri-o://53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4" gracePeriod=30
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.739028 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crng7\" (UniqueName: \"kubernetes.io/projected/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-kube-api-access-crng7\") on node \"crc\" DevicePath \"\""
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.739095 4736 generic.go:334] "Generic (PLEG): container finished" podID="a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" containerID="eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d" exitCode=0
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.739202 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5"
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.739207 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" event={"ID":"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f","Type":"ContainerDied","Data":"eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d"}
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.739236 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zm8d5" event={"ID":"a27eb1e9-eeb3-4138-bffb-43d69c6ab74f","Type":"ContainerDied","Data":"2878a9bc1269e442203edaaed531196beb223dfb712cb21dcdad88a63d2f8538"}
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.739254 4736 scope.go:117] "RemoveContainer" containerID="eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d"
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.753718 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=24.753693593 podStartE2EDuration="24.753693593s" podCreationTimestamp="2026-02-14 11:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:33.726166556 +0000 UTC m=+1204.094793924" watchObservedRunningTime="2026-02-14 11:01:33.753693593 +0000 UTC m=+1204.122320951"
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.760520 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d25fc1d5-2a1a-4c43-9752-e7fac6623591","Type":"ContainerDied","Data":"c14fd164061da6837f428c4fdd602b96094effd39f3829163bbf9e1dc946e519"}
Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.760635 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.787200 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77f6fd57bc-nlqb5" event={"ID":"2c911912-7053-4fc2-a31e-20bcce081834","Type":"ContainerStarted","Data":"22c5d0275efb993b2fcf4238f8b3c5bc3be2100cd0a6bb245f216c7f2dc32105"} Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.787321 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.814899 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tb7hg" event={"ID":"3474d549-a236-46a6-ad9a-46186dca5831","Type":"ContainerDied","Data":"687b13a3217819d80bb9ddcfc6923e9d4504e0ae2b0f079ee0a132fb9f8708ff"} Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.814938 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="687b13a3217819d80bb9ddcfc6923e9d4504e0ae2b0f079ee0a132fb9f8708ff" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.814993 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tb7hg" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.815313 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" (UID: "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.819341 4736 scope.go:117] "RemoveContainer" containerID="33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.844241 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.877703 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7677d9df65-nl5rx"] Feb 14 11:01:33 crc kubenswrapper[4736]: E0214 11:01:33.878201 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" containerName="init" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.892828 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" containerName="init" Feb 14 11:01:33 crc kubenswrapper[4736]: E0214 11:01:33.892896 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerName="glance-httpd" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.892905 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerName="glance-httpd" Feb 14 11:01:33 crc kubenswrapper[4736]: E0214 11:01:33.892920 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3474d549-a236-46a6-ad9a-46186dca5831" containerName="keystone-bootstrap" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.892929 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3474d549-a236-46a6-ad9a-46186dca5831" containerName="keystone-bootstrap" Feb 14 11:01:33 crc kubenswrapper[4736]: E0214 11:01:33.892945 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="139d84a5-037c-497e-a77e-23eeb4e993c7" containerName="init" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.892951 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="139d84a5-037c-497e-a77e-23eeb4e993c7" containerName="init" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.881018 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" (UID: "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:33 crc kubenswrapper[4736]: E0214 11:01:33.893014 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" containerName="dnsmasq-dns" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.893021 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" containerName="dnsmasq-dns" Feb 14 11:01:33 crc kubenswrapper[4736]: E0214 11:01:33.893041 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerName="glance-log" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.893048 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerName="glance-log" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.893314 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerName="glance-log" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.893340 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" containerName="glance-httpd" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.893351 4736 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3474d549-a236-46a6-ad9a-46186dca5831" containerName="keystone-bootstrap" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.893366 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="139d84a5-037c-497e-a77e-23eeb4e993c7" containerName="init" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.893393 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" containerName="dnsmasq-dns" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.893956 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.906257 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.906460 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t8r6k" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.906610 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.906784 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.906890 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.906987 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.907472 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7677d9df65-nl5rx"] Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.921189 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-config" (OuterVolumeSpecName: "config") pod "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" (UID: "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.924862 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945200 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-config-data\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945239 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-scripts\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945265 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-combined-ca-bundle\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945280 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcv75\" (UniqueName: \"kubernetes.io/projected/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-kube-api-access-dcv75\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") 
" pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945322 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-public-tls-certs\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945356 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-fernet-keys\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945373 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-credential-keys\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945400 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-internal-tls-certs\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945446 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.945455 4736 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.972087 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.982711 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77f6fd57bc-nlqb5" podStartSLOduration=9.982688656 podStartE2EDuration="9.982688656s" podCreationTimestamp="2026-02-14 11:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:33.829253252 +0000 UTC m=+1204.197880640" watchObservedRunningTime="2026-02-14 11:01:33.982688656 +0000 UTC m=+1204.351316024" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.984967 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" (UID: "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:33 crc kubenswrapper[4736]: I0214 11:01:33.993072 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" (UID: "a27eb1e9-eeb3-4138-bffb-43d69c6ab74f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.001902 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.003448 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.005199 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.006651 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.007562 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.042646 4736 scope.go:117] "RemoveContainer" containerID="eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d" Feb 14 11:01:34 crc kubenswrapper[4736]: E0214 11:01:34.045106 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d\": container with ID starting with eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d not found: ID does not exist" containerID="eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.045147 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d"} err="failed to get container status \"eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d\": rpc error: code = NotFound desc = could not find container 
\"eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d\": container with ID starting with eaafb5196d475c4f00e259a3b2664294c5cf49050de7691f895daca871bb277d not found: ID does not exist" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.045169 4736 scope.go:117] "RemoveContainer" containerID="33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.046879 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.046921 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-fernet-keys\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.046939 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-logs\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.046959 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-credential-keys\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047029 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-config-data\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047083 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-internal-tls-certs\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047123 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047153 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cktcf\" (UniqueName: \"kubernetes.io/projected/cb2e07c6-983a-4e5f-8389-ed2de539ee33-kube-api-access-cktcf\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047183 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 
11:01:34.047279 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-config-data\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047305 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-scripts\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047337 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-scripts\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047365 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-combined-ca-bundle\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047383 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcv75\" (UniqueName: \"kubernetes.io/projected/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-kube-api-access-dcv75\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047437 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047486 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-public-tls-certs\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047539 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.047550 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.052273 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-public-tls-certs\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: E0214 11:01:34.052401 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2\": container with ID starting with 33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2 not found: ID does not exist" 
containerID="33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.052428 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2"} err="failed to get container status \"33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2\": rpc error: code = NotFound desc = could not find container \"33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2\": container with ID starting with 33bcfe9efee3c73a11911a20293bdcbdacfe60fb843b4439f8942ef24bbdb1d2 not found: ID does not exist" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.052453 4736 scope.go:117] "RemoveContainer" containerID="77b2cf073d9f571cc58150341656490e7da31fdf99782db3de73d261961298b9" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.057095 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-credential-keys\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.060588 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-fernet-keys\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.063874 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-scripts\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 
11:01:34.065237 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-config-data\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.078559 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-combined-ca-bundle\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.091993 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-internal-tls-certs\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.092121 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcv75\" (UniqueName: \"kubernetes.io/projected/6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1-kube-api-access-dcv75\") pod \"keystone-7677d9df65-nl5rx\" (UID: \"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1\") " pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.129414 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.149029 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.149111 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.149135 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-logs\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.149179 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-config-data\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.149216 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 
11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.149240 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cktcf\" (UniqueName: \"kubernetes.io/projected/cb2e07c6-983a-4e5f-8389-ed2de539ee33-kube-api-access-cktcf\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.149260 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.149329 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-scripts\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.152546 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.153450 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-logs\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.154337 4736 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.154473 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zm8d5"] Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.157589 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.173593 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.176033 4736 scope.go:117] "RemoveContainer" containerID="9378f881e3152de6b1e5af6d6370a7e644caf25d78156655790eb536c01e470e" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.183030 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cktcf\" (UniqueName: \"kubernetes.io/projected/cb2e07c6-983a-4e5f-8389-ed2de539ee33-kube-api-access-cktcf\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.183419 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-config-data\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.183646 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-scripts\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.196521 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zm8d5"] Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.299283 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.429266 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a27eb1e9-eeb3-4138-bffb-43d69c6ab74f" path="/var/lib/kubelet/pods/a27eb1e9-eeb3-4138-bffb-43d69c6ab74f/volumes" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.430482 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d25fc1d5-2a1a-4c43-9752-e7fac6623591" path="/var/lib/kubelet/pods/d25fc1d5-2a1a-4c43-9752-e7fac6623591/volumes" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.440191 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.642205 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.805545 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.805686 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-config-data\") pod \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.805764 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-combined-ca-bundle\") pod \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.805806 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68ncx\" (UniqueName: \"kubernetes.io/projected/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-kube-api-access-68ncx\") pod \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.805869 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-scripts\") pod \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.805925 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-httpd-run\") pod \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.806008 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-logs\") pod \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\" (UID: \"6d0581c3-9ebb-4107-bb47-4c33f04a99b5\") " Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.807298 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-logs" (OuterVolumeSpecName: "logs") pod "6d0581c3-9ebb-4107-bb47-4c33f04a99b5" (UID: "6d0581c3-9ebb-4107-bb47-4c33f04a99b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.807358 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6d0581c3-9ebb-4107-bb47-4c33f04a99b5" (UID: "6d0581c3-9ebb-4107-bb47-4c33f04a99b5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.829889 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-kube-api-access-68ncx" (OuterVolumeSpecName: "kube-api-access-68ncx") pod "6d0581c3-9ebb-4107-bb47-4c33f04a99b5" (UID: "6d0581c3-9ebb-4107-bb47-4c33f04a99b5"). InnerVolumeSpecName "kube-api-access-68ncx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.832115 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "6d0581c3-9ebb-4107-bb47-4c33f04a99b5" (UID: "6d0581c3-9ebb-4107-bb47-4c33f04a99b5"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.850376 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-scripts" (OuterVolumeSpecName: "scripts") pod "6d0581c3-9ebb-4107-bb47-4c33f04a99b5" (UID: "6d0581c3-9ebb-4107-bb47-4c33f04a99b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.873368 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7677d9df65-nl5rx"] Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.903907 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d0581c3-9ebb-4107-bb47-4c33f04a99b5" (UID: "6d0581c3-9ebb-4107-bb47-4c33f04a99b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.917949 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-config-data" (OuterVolumeSpecName: "config-data") pod "6d0581c3-9ebb-4107-bb47-4c33f04a99b5" (UID: "6d0581c3-9ebb-4107-bb47-4c33f04a99b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.920061 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.920112 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.920129 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68ncx\" (UniqueName: \"kubernetes.io/projected/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-kube-api-access-68ncx\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.920142 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.920154 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.920164 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d0581c3-9ebb-4107-bb47-4c33f04a99b5-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.920203 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.930946 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-4ksm2" event={"ID":"df559ea6-6169-48d5-a47c-f765681b9a1e","Type":"ContainerStarted","Data":"9c4d94499295ca775b95c97766ec949b48b19959dd46670da5fa8d1f9152bb44"} Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.949279 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4ksm2" podStartSLOduration=3.48543795 podStartE2EDuration="43.949263852s" podCreationTimestamp="2026-02-14 11:00:51 +0000 UTC" firstStartedPulling="2026-02-14 11:00:53.627901072 +0000 UTC m=+1163.996528440" lastFinishedPulling="2026-02-14 11:01:34.091726974 +0000 UTC m=+1204.460354342" observedRunningTime="2026-02-14 11:01:34.947651895 +0000 UTC m=+1205.316279263" watchObservedRunningTime="2026-02-14 11:01:34.949263852 +0000 UTC m=+1205.317891220" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.959316 4736 generic.go:334] "Generic (PLEG): container finished" podID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerID="53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4" exitCode=143 Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.959341 4736 generic.go:334] "Generic (PLEG): container finished" podID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerID="eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a" exitCode=143 Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.959381 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d0581c3-9ebb-4107-bb47-4c33f04a99b5","Type":"ContainerDied","Data":"53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4"} Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.959404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d0581c3-9ebb-4107-bb47-4c33f04a99b5","Type":"ContainerDied","Data":"eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a"} Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 
11:01:34.959413 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d0581c3-9ebb-4107-bb47-4c33f04a99b5","Type":"ContainerDied","Data":"a36c8c9763fae6c874d5d0eee99a5aad72851d7a8b58cc30a14a0fceff87ff1e"} Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.959428 4736 scope.go:117] "RemoveContainer" containerID="53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4" Feb 14 11:01:34 crc kubenswrapper[4736]: I0214 11:01:34.959512 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.035281 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.122879 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.166916 4736 scope.go:117] "RemoveContainer" containerID="eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.168844 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.176827 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.209291 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:01:35 crc kubenswrapper[4736]: E0214 11:01:35.209673 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerName="glance-httpd" Feb 14 11:01:35 
crc kubenswrapper[4736]: I0214 11:01:35.209688 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerName="glance-httpd" Feb 14 11:01:35 crc kubenswrapper[4736]: E0214 11:01:35.209704 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerName="glance-log" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.209710 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerName="glance-log" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.211091 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerName="glance-log" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.211127 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" containerName="glance-httpd" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.212192 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.220919 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.231360 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.232037 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.306950 4736 scope.go:117] "RemoveContainer" containerID="53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4" Feb 14 11:01:35 crc kubenswrapper[4736]: E0214 11:01:35.307810 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4\": container with ID starting with 53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4 not found: ID does not exist" containerID="53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.307854 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4"} err="failed to get container status \"53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4\": rpc error: code = NotFound desc = could not find container \"53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4\": container with ID starting with 53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4 not found: ID does not exist" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.307882 4736 scope.go:117] "RemoveContainer" 
containerID="eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a" Feb 14 11:01:35 crc kubenswrapper[4736]: E0214 11:01:35.310923 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a\": container with ID starting with eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a not found: ID does not exist" containerID="eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.310962 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a"} err="failed to get container status \"eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a\": rpc error: code = NotFound desc = could not find container \"eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a\": container with ID starting with eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a not found: ID does not exist" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.310999 4736 scope.go:117] "RemoveContainer" containerID="53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.317955 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4"} err="failed to get container status \"53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4\": rpc error: code = NotFound desc = could not find container \"53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4\": container with ID starting with 53683018e62608275d9d4fccc61d56dcfd59af72c00868ae37d9d200279a1fa4 not found: ID does not exist" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.317994 4736 scope.go:117] 
"RemoveContainer" containerID="eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.320385 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a"} err="failed to get container status \"eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a\": rpc error: code = NotFound desc = could not find container \"eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a\": container with ID starting with eef69ff6f922ee2c2ff66a2a02fbe3202a1b38fea2fbb12b68e2c89e11aa347a not found: ID does not exist" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.325620 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.325658 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fp9x\" (UniqueName: \"kubernetes.io/projected/6ae4ad0a-4038-4a87-943f-c3794df836c6-kube-api-access-9fp9x\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.325684 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.325723 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.325774 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.325797 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.325857 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-logs\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.325883 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.362290 4736 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.427288 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-logs\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.427372 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.427496 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.427526 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fp9x\" (UniqueName: \"kubernetes.io/projected/6ae4ad0a-4038-4a87-943f-c3794df836c6-kube-api-access-9fp9x\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.427575 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 
14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.427633 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.427667 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.427717 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.428415 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.428821 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-logs\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.430288 4736 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.461457 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.461648 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.461905 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.463158 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.465434 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9fp9x\" (UniqueName: \"kubernetes.io/projected/6ae4ad0a-4038-4a87-943f-c3794df836c6-kube-api-access-9fp9x\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.509802 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.541082 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.983343 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cb2e07c6-983a-4e5f-8389-ed2de539ee33","Type":"ContainerStarted","Data":"24100200c35f919964960605338a664a6a473211349fe1a6efdf29428d98f01f"} Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.990404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7677d9df65-nl5rx" event={"ID":"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1","Type":"ContainerStarted","Data":"204df5408832ddb0b381ca63511d04e062101820674ca379db010b5cc4617dea"} Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.990435 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7677d9df65-nl5rx" event={"ID":"6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1","Type":"ContainerStarted","Data":"de864e48331ae88ec59664fdfcccc949ca39ac2b679e27ec15c1e692b3c2584c"} Feb 14 11:01:35 crc kubenswrapper[4736]: I0214 11:01:35.991608 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:01:36 crc kubenswrapper[4736]: I0214 11:01:36.034300 4736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7677d9df65-nl5rx" podStartSLOduration=3.034284519 podStartE2EDuration="3.034284519s" podCreationTimestamp="2026-02-14 11:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:36.030727235 +0000 UTC m=+1206.399354603" watchObservedRunningTime="2026-02-14 11:01:36.034284519 +0000 UTC m=+1206.402911887" Feb 14 11:01:36 crc kubenswrapper[4736]: I0214 11:01:36.372774 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:01:36 crc kubenswrapper[4736]: I0214 11:01:36.438950 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d0581c3-9ebb-4107-bb47-4c33f04a99b5" path="/var/lib/kubelet/pods/6d0581c3-9ebb-4107-bb47-4c33f04a99b5/volumes" Feb 14 11:01:37 crc kubenswrapper[4736]: I0214 11:01:37.024982 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cb2e07c6-983a-4e5f-8389-ed2de539ee33","Type":"ContainerStarted","Data":"b73ff731a78dccf794fd3f3ac9ab7859c2aac93f3c6cec5224a2f506180579e2"} Feb 14 11:01:37 crc kubenswrapper[4736]: I0214 11:01:37.029950 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ae4ad0a-4038-4a87-943f-c3794df836c6","Type":"ContainerStarted","Data":"dec3e88f1ffd1b043596fe85d3df8d54bc091d65440d632df5d8e5e8d2a8c702"} Feb 14 11:01:37 crc kubenswrapper[4736]: I0214 11:01:37.038613 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9bdr9" event={"ID":"d43521c3-8892-4a34-af06-1d93a8f50c38","Type":"ContainerStarted","Data":"953a8f6acf6c555b2a3d91b7a06ac13470b5c5e20e72749e48b36b5c8486ef35"} Feb 14 11:01:37 crc kubenswrapper[4736]: I0214 11:01:37.058372 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-db-sync-9bdr9" podStartSLOduration=5.735419761 podStartE2EDuration="47.058357149s" podCreationTimestamp="2026-02-14 11:00:50 +0000 UTC" firstStartedPulling="2026-02-14 11:00:53.641948906 +0000 UTC m=+1164.010576274" lastFinishedPulling="2026-02-14 11:01:34.964886294 +0000 UTC m=+1205.333513662" observedRunningTime="2026-02-14 11:01:37.057283428 +0000 UTC m=+1207.425910796" watchObservedRunningTime="2026-02-14 11:01:37.058357149 +0000 UTC m=+1207.426984517" Feb 14 11:01:38 crc kubenswrapper[4736]: I0214 11:01:38.049820 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cb2e07c6-983a-4e5f-8389-ed2de539ee33","Type":"ContainerStarted","Data":"d7be487436ca3c68565c3c3b81b579e039fd40f7de365c45fc15c582f05ef6fd"} Feb 14 11:01:38 crc kubenswrapper[4736]: I0214 11:01:38.055599 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ae4ad0a-4038-4a87-943f-c3794df836c6","Type":"ContainerStarted","Data":"286886288e8293a9f9f716288452cc220414a81c7aec0336bf085ca9099af496"} Feb 14 11:01:38 crc kubenswrapper[4736]: I0214 11:01:38.078297 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.07828148 podStartE2EDuration="5.07828148s" podCreationTimestamp="2026-02-14 11:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:38.071086492 +0000 UTC m=+1208.439713870" watchObservedRunningTime="2026-02-14 11:01:38.07828148 +0000 UTC m=+1208.446908848" Feb 14 11:01:39 crc kubenswrapper[4736]: I0214 11:01:39.070233 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ae4ad0a-4038-4a87-943f-c3794df836c6","Type":"ContainerStarted","Data":"59fe292ef949872f6b06cf85ed7bb72db8b4e9c599d63fb6c70aa86f92eaa601"} 
Feb 14 11:01:39 crc kubenswrapper[4736]: I0214 11:01:39.076190 4736 generic.go:334] "Generic (PLEG): container finished" podID="f8f62557-0339-4cd9-884b-a3fdbc564ed0" containerID="7c3ac7afc2de52097134e8b1842711fd77186d0d2fc2ec237c1207476458278f" exitCode=0 Feb 14 11:01:39 crc kubenswrapper[4736]: I0214 11:01:39.076318 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z89bc" event={"ID":"f8f62557-0339-4cd9-884b-a3fdbc564ed0","Type":"ContainerDied","Data":"7c3ac7afc2de52097134e8b1842711fd77186d0d2fc2ec237c1207476458278f"} Feb 14 11:01:39 crc kubenswrapper[4736]: I0214 11:01:39.125081 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.125064069 podStartE2EDuration="4.125064069s" podCreationTimestamp="2026-02-14 11:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:39.105205634 +0000 UTC m=+1209.473833012" watchObservedRunningTime="2026-02-14 11:01:39.125064069 +0000 UTC m=+1209.493691437" Feb 14 11:01:40 crc kubenswrapper[4736]: I0214 11:01:40.279316 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:01:40 crc kubenswrapper[4736]: I0214 11:01:40.446483 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d96c5d8-mfqqp" podUID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 14 11:01:41 crc kubenswrapper[4736]: I0214 11:01:41.097287 4736 generic.go:334] 
"Generic (PLEG): container finished" podID="df559ea6-6169-48d5-a47c-f765681b9a1e" containerID="9c4d94499295ca775b95c97766ec949b48b19959dd46670da5fa8d1f9152bb44" exitCode=0 Feb 14 11:01:41 crc kubenswrapper[4736]: I0214 11:01:41.097376 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ksm2" event={"ID":"df559ea6-6169-48d5-a47c-f765681b9a1e","Type":"ContainerDied","Data":"9c4d94499295ca775b95c97766ec949b48b19959dd46670da5fa8d1f9152bb44"} Feb 14 11:01:43 crc kubenswrapper[4736]: I0214 11:01:43.897473 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z89bc" Feb 14 11:01:43 crc kubenswrapper[4736]: I0214 11:01:43.900285 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.019219 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-config-data\") pod \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.019879 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f62557-0339-4cd9-884b-a3fdbc564ed0-logs\") pod \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.020234 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f62557-0339-4cd9-884b-a3fdbc564ed0-logs" (OuterVolumeSpecName: "logs") pod "f8f62557-0339-4cd9-884b-a3fdbc564ed0" (UID: "f8f62557-0339-4cd9-884b-a3fdbc564ed0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.020507 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-scripts\") pod \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.020607 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-combined-ca-bundle\") pod \"df559ea6-6169-48d5-a47c-f765681b9a1e\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.020924 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbkpm\" (UniqueName: \"kubernetes.io/projected/df559ea6-6169-48d5-a47c-f765681b9a1e-kube-api-access-dbkpm\") pod \"df559ea6-6169-48d5-a47c-f765681b9a1e\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.021288 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrscn\" (UniqueName: \"kubernetes.io/projected/f8f62557-0339-4cd9-884b-a3fdbc564ed0-kube-api-access-hrscn\") pod \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.022038 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-combined-ca-bundle\") pod \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\" (UID: \"f8f62557-0339-4cd9-884b-a3fdbc564ed0\") " Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.022152 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-db-sync-config-data\") pod \"df559ea6-6169-48d5-a47c-f765681b9a1e\" (UID: \"df559ea6-6169-48d5-a47c-f765681b9a1e\") " Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.022825 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8f62557-0339-4cd9-884b-a3fdbc564ed0-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.033309 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-scripts" (OuterVolumeSpecName: "scripts") pod "f8f62557-0339-4cd9-884b-a3fdbc564ed0" (UID: "f8f62557-0339-4cd9-884b-a3fdbc564ed0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.040952 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df559ea6-6169-48d5-a47c-f765681b9a1e-kube-api-access-dbkpm" (OuterVolumeSpecName: "kube-api-access-dbkpm") pod "df559ea6-6169-48d5-a47c-f765681b9a1e" (UID: "df559ea6-6169-48d5-a47c-f765681b9a1e"). InnerVolumeSpecName "kube-api-access-dbkpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.040962 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "df559ea6-6169-48d5-a47c-f765681b9a1e" (UID: "df559ea6-6169-48d5-a47c-f765681b9a1e"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.041024 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f62557-0339-4cd9-884b-a3fdbc564ed0-kube-api-access-hrscn" (OuterVolumeSpecName: "kube-api-access-hrscn") pod "f8f62557-0339-4cd9-884b-a3fdbc564ed0" (UID: "f8f62557-0339-4cd9-884b-a3fdbc564ed0"). InnerVolumeSpecName "kube-api-access-hrscn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.068986 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8f62557-0339-4cd9-884b-a3fdbc564ed0" (UID: "f8f62557-0339-4cd9-884b-a3fdbc564ed0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.069341 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df559ea6-6169-48d5-a47c-f765681b9a1e" (UID: "df559ea6-6169-48d5-a47c-f765681b9a1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.085290 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-config-data" (OuterVolumeSpecName: "config-data") pod "f8f62557-0339-4cd9-884b-a3fdbc564ed0" (UID: "f8f62557-0339-4cd9-884b-a3fdbc564ed0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.123208 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ksm2" event={"ID":"df559ea6-6169-48d5-a47c-f765681b9a1e","Type":"ContainerDied","Data":"22761a3e7e1fc719ee7a47cb5666eb543d9ec27d5f16509b7a79e9ffee17bb37"} Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.123263 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22761a3e7e1fc719ee7a47cb5666eb543d9ec27d5f16509b7a79e9ffee17bb37" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.123349 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ksm2" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.125054 4736 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.125078 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.125089 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.125108 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df559ea6-6169-48d5-a47c-f765681b9a1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.125117 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbkpm\" (UniqueName: 
\"kubernetes.io/projected/df559ea6-6169-48d5-a47c-f765681b9a1e-kube-api-access-dbkpm\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.125128 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrscn\" (UniqueName: \"kubernetes.io/projected/f8f62557-0339-4cd9-884b-a3fdbc564ed0-kube-api-access-hrscn\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.125136 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f62557-0339-4cd9-884b-a3fdbc564ed0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.126971 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93585de-a12c-446d-a045-16d74eb6d7db","Type":"ContainerStarted","Data":"e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d"} Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.128838 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z89bc" event={"ID":"f8f62557-0339-4cd9-884b-a3fdbc564ed0","Type":"ContainerDied","Data":"0b5947149c62d16a2e0a05278e37c3cc0892b3834965e2da47cd77a538a18e72"} Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.128938 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b5947149c62d16a2e0a05278e37c3cc0892b3834965e2da47cd77a538a18e72" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.129061 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-z89bc" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.441062 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.441104 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.484727 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 11:01:44 crc kubenswrapper[4736]: I0214 11:01:44.504972 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.126173 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-557678d96b-tqmtc"] Feb 14 11:01:45 crc kubenswrapper[4736]: E0214 11:01:45.126528 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df559ea6-6169-48d5-a47c-f765681b9a1e" containerName="barbican-db-sync" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.126541 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="df559ea6-6169-48d5-a47c-f765681b9a1e" containerName="barbican-db-sync" Feb 14 11:01:45 crc kubenswrapper[4736]: E0214 11:01:45.126553 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f62557-0339-4cd9-884b-a3fdbc564ed0" containerName="placement-db-sync" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.126559 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f62557-0339-4cd9-884b-a3fdbc564ed0" containerName="placement-db-sync" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.126703 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="df559ea6-6169-48d5-a47c-f765681b9a1e" containerName="barbican-db-sync" Feb 14 11:01:45 crc 
kubenswrapper[4736]: I0214 11:01:45.126728 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f62557-0339-4cd9-884b-a3fdbc564ed0" containerName="placement-db-sync" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.127667 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.132983 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.133269 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.133412 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.133508 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.133603 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cshh5" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.209432 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-557678d96b-tqmtc"] Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.230825 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9bdr9" event={"ID":"d43521c3-8892-4a34-af06-1d93a8f50c38","Type":"ContainerDied","Data":"953a8f6acf6c555b2a3d91b7a06ac13470b5c5e20e72749e48b36b5c8486ef35"} Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.213828 4736 generic.go:334] "Generic (PLEG): container finished" podID="d43521c3-8892-4a34-af06-1d93a8f50c38" containerID="953a8f6acf6c555b2a3d91b7a06ac13470b5c5e20e72749e48b36b5c8486ef35" exitCode=0 Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 
11:01:45.232726 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.232836 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.252035 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjlhc\" (UniqueName: \"kubernetes.io/projected/ec9d0890-b994-4ada-a802-a43cbe2fc50e-kube-api-access-fjlhc\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.252088 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-combined-ca-bundle\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.252110 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-config-data\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.252132 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-internal-tls-certs\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 
11:01:45.252169 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-scripts\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.252208 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-public-tls-certs\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.252231 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9d0890-b994-4ada-a802-a43cbe2fc50e-logs\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.279844 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-79c85f78bf-qrrmn"] Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.281768 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.289771 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mmvt8" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.289932 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.289981 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.292312 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79c85f78bf-qrrmn"] Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.359984 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc8cc8f5-bfab-490d-be14-44be8090fb21-config-data-custom\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360030 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc8cc8f5-bfab-490d-be14-44be8090fb21-config-data\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360053 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8cc8f5-bfab-490d-be14-44be8090fb21-combined-ca-bundle\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " 
pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360108 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjlhc\" (UniqueName: \"kubernetes.io/projected/ec9d0890-b994-4ada-a802-a43cbe2fc50e-kube-api-access-fjlhc\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360165 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-combined-ca-bundle\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360188 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-config-data\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360216 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-internal-tls-certs\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360255 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-scripts\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc 
kubenswrapper[4736]: I0214 11:01:45.360301 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-public-tls-certs\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360327 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9d0890-b994-4ada-a802-a43cbe2fc50e-logs\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360345 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc8cc8f5-bfab-490d-be14-44be8090fb21-logs\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.360367 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqln\" (UniqueName: \"kubernetes.io/projected/dc8cc8f5-bfab-490d-be14-44be8090fb21-kube-api-access-vfqln\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.368381 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9d0890-b994-4ada-a802-a43cbe2fc50e-logs\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.368637 4736 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-85957cbc8-r7xrw"] Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.370627 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.380212 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-combined-ca-bundle\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.385681 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-scripts\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.388722 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.401331 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-public-tls-certs\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.411535 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-config-data\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: 
I0214 11:01:45.411909 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9d0890-b994-4ada-a802-a43cbe2fc50e-internal-tls-certs\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.446718 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjlhc\" (UniqueName: \"kubernetes.io/projected/ec9d0890-b994-4ada-a802-a43cbe2fc50e-kube-api-access-fjlhc\") pod \"placement-557678d96b-tqmtc\" (UID: \"ec9d0890-b994-4ada-a802-a43cbe2fc50e\") " pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.461480 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc8cc8f5-bfab-490d-be14-44be8090fb21-config-data\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.461518 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78nxz\" (UniqueName: \"kubernetes.io/projected/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-kube-api-access-78nxz\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.461550 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8cc8f5-bfab-490d-be14-44be8090fb21-combined-ca-bundle\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc 
kubenswrapper[4736]: I0214 11:01:45.461591 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-config-data\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.461613 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-config-data-custom\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.461648 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-combined-ca-bundle\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.461710 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc8cc8f5-bfab-490d-be14-44be8090fb21-logs\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.461727 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfqln\" (UniqueName: \"kubernetes.io/projected/dc8cc8f5-bfab-490d-be14-44be8090fb21-kube-api-access-vfqln\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: 
\"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.479012 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-r7jnf"] Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.479489 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-logs\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.479606 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc8cc8f5-bfab-490d-be14-44be8090fb21-config-data-custom\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.480446 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.480777 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc8cc8f5-bfab-490d-be14-44be8090fb21-logs\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.490466 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8cc8f5-bfab-490d-be14-44be8090fb21-combined-ca-bundle\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.492038 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.493246 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc8cc8f5-bfab-490d-be14-44be8090fb21-config-data\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.497401 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc8cc8f5-bfab-490d-be14-44be8090fb21-config-data-custom\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.517147 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfqln\" (UniqueName: 
\"kubernetes.io/projected/dc8cc8f5-bfab-490d-be14-44be8090fb21-kube-api-access-vfqln\") pod \"barbican-worker-79c85f78bf-qrrmn\" (UID: \"dc8cc8f5-bfab-490d-be14-44be8090fb21\") " pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.518210 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85957cbc8-r7xrw"] Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.550209 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.550267 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-r7jnf"] Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.550287 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581249 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-svc\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581330 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-logs\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581386 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78nxz\" (UniqueName: 
\"kubernetes.io/projected/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-kube-api-access-78nxz\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581413 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-kube-api-access-nf629\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581436 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581456 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-config-data\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581476 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-config-data-custom\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581495 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-config\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581524 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-combined-ca-bundle\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581539 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.581594 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.583955 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-logs\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc 
kubenswrapper[4736]: I0214 11:01:45.595592 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-config-data\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.611891 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.619322 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5758749df4-tzq2d"] Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.620142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-config-data-custom\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.622568 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.622988 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-79c85f78bf-qrrmn" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.624880 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-combined-ca-bundle\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.625491 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78nxz\" (UniqueName: \"kubernetes.io/projected/a1af432c-5ab8-4eb5-87f0-2f9519c1004b-kube-api-access-78nxz\") pod \"barbican-keystone-listener-85957cbc8-r7xrw\" (UID: \"a1af432c-5ab8-4eb5-87f0-2f9519c1004b\") " pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.627127 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.646330 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5758749df4-tzq2d"] Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.662642 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.682572 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693091 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-th8cg\" (UniqueName: \"kubernetes.io/projected/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-kube-api-access-th8cg\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693132 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-svc\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693324 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-kube-api-access-nf629\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-logs\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693364 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-combined-ca-bundle\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693399 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693466 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-config\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693482 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data-custom\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693539 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.693556 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.684564 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.694281 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-svc\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.695361 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.703147 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.710321 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-config\") pod \"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.757967 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-kube-api-access-nf629\") pod 
\"dnsmasq-dns-85ff748b95-r7jnf\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.798472 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-logs\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.798513 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-combined-ca-bundle\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.798569 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data-custom\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.798619 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.798693 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8cg\" (UniqueName: \"kubernetes.io/projected/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-kube-api-access-th8cg\") pod \"barbican-api-5758749df4-tzq2d\" (UID: 
\"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.804257 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-logs\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.826677 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8cg\" (UniqueName: \"kubernetes.io/projected/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-kube-api-access-th8cg\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.855238 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.867151 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-combined-ca-bundle\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.876791 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data-custom\") pod \"barbican-api-5758749df4-tzq2d\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " pod="openstack/barbican-api-5758749df4-tzq2d" 
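The records above show the kubelet volume reconciler attaching and mounting secrets, configmaps, empty-dirs, and projected service-account tokens for the barbican, placement, and dnsmasq pods. A minimal sketch of pulling the successful `MountVolume.SetUp` events out of such journal lines with Python — the field layout (escaped quotes inside the structured message, a trailing unescaped `pod="ns/name"` key) is inferred from these log lines, not from any documented format:

```python
import re

# Inside the klog structured message, quotes are escaped (volume \"logs\");
# the trailing pod="ns/name" key-value pair uses plain quotes.
VOLUME_RE = re.compile(r'for volume \\"(?P<volume>[^"\\]+)\\"')
POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')

def mount_events(lines):
    """Yield (pod, volume) pairs for successful MountVolume.SetUp records."""
    for line in lines:
        if "MountVolume.SetUp succeeded" not in line:
            continue
        vol = VOLUME_RE.search(line)
        pod = POD_RE.search(line)
        if vol and pod:
            yield pod.group("pod"), vol.group("volume")

# Sample record mirroring the journal lines above (UID shortened).
sample = (
    'Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.804257 4736 '
    'operation_generator.go:637] "MountVolume.SetUp succeeded for volume '
    '\\"logs\\" (UniqueName: \\"kubernetes.io/empty-dir/02f71999-logs\\") '
    'pod \\"barbican-api-5758749df4-tzq2d\\" " '
    'pod="openstack/barbican-api-5758749df4-tzq2d"'
)
print(list(mount_events([sample])))
# → [('openstack/barbican-api-5758749df4-tzq2d', 'logs')]
```

Feeding the whole journal through `mount_events` and grouping by pod gives a quick view of which pod is still waiting on which volume when a sandbox fails to start.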
Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.902858 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.931466 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:45 crc kubenswrapper[4736]: I0214 11:01:45.981580 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:46 crc kubenswrapper[4736]: I0214 11:01:46.257588 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:46 crc kubenswrapper[4736]: I0214 11:01:46.257979 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:46 crc kubenswrapper[4736]: I0214 11:01:46.307422 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-557678d96b-tqmtc"] Feb 14 11:01:46 crc kubenswrapper[4736]: I0214 11:01:46.532223 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79c85f78bf-qrrmn"] Feb 14 11:01:46 crc kubenswrapper[4736]: I0214 11:01:46.730179 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85957cbc8-r7xrw"] Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.003578 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-r7jnf"] Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.020796 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5758749df4-tzq2d"] Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.066114 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.141271 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-scripts\") pod \"d43521c3-8892-4a34-af06-1d93a8f50c38\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.141327 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-db-sync-config-data\") pod \"d43521c3-8892-4a34-af06-1d93a8f50c38\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.141406 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-config-data\") pod \"d43521c3-8892-4a34-af06-1d93a8f50c38\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.141431 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d43521c3-8892-4a34-af06-1d93a8f50c38-etc-machine-id\") pod \"d43521c3-8892-4a34-af06-1d93a8f50c38\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.141478 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-combined-ca-bundle\") pod \"d43521c3-8892-4a34-af06-1d93a8f50c38\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.141601 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw8rk\" 
(UniqueName: \"kubernetes.io/projected/d43521c3-8892-4a34-af06-1d93a8f50c38-kube-api-access-lw8rk\") pod \"d43521c3-8892-4a34-af06-1d93a8f50c38\" (UID: \"d43521c3-8892-4a34-af06-1d93a8f50c38\") " Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.145869 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d43521c3-8892-4a34-af06-1d93a8f50c38-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d43521c3-8892-4a34-af06-1d93a8f50c38" (UID: "d43521c3-8892-4a34-af06-1d93a8f50c38"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.160956 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d43521c3-8892-4a34-af06-1d93a8f50c38-kube-api-access-lw8rk" (OuterVolumeSpecName: "kube-api-access-lw8rk") pod "d43521c3-8892-4a34-af06-1d93a8f50c38" (UID: "d43521c3-8892-4a34-af06-1d93a8f50c38"). InnerVolumeSpecName "kube-api-access-lw8rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.168326 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d43521c3-8892-4a34-af06-1d93a8f50c38" (UID: "d43521c3-8892-4a34-af06-1d93a8f50c38"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.168602 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-scripts" (OuterVolumeSpecName: "scripts") pod "d43521c3-8892-4a34-af06-1d93a8f50c38" (UID: "d43521c3-8892-4a34-af06-1d93a8f50c38"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.230946 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d43521c3-8892-4a34-af06-1d93a8f50c38" (UID: "d43521c3-8892-4a34-af06-1d93a8f50c38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.245976 4736 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d43521c3-8892-4a34-af06-1d93a8f50c38-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.246015 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.246030 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw8rk\" (UniqueName: \"kubernetes.io/projected/d43521c3-8892-4a34-af06-1d93a8f50c38-kube-api-access-lw8rk\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.246043 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.246054 4736 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.254096 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-config-data" (OuterVolumeSpecName: "config-data") pod "d43521c3-8892-4a34-af06-1d93a8f50c38" (UID: "d43521c3-8892-4a34-af06-1d93a8f50c38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.311692 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79c85f78bf-qrrmn" event={"ID":"dc8cc8f5-bfab-490d-be14-44be8090fb21","Type":"ContainerStarted","Data":"b75065ff8cbb2de7d4d87e50efd55b31314cffca21343f04e53d30a6418163fc"} Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.320853 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" event={"ID":"626458da-6f6a-4fbd-9eb6-cbf9120bdb32","Type":"ContainerStarted","Data":"f26d15d49d4c1a43e754422664f0ba0284e8023ab280d03315605074265d92d5"} Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.348292 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d43521c3-8892-4a34-af06-1d93a8f50c38-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.351949 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-557678d96b-tqmtc" event={"ID":"ec9d0890-b994-4ada-a802-a43cbe2fc50e","Type":"ContainerStarted","Data":"76044af5bf228a0817c6df942aece15b4b2c7eed3d7c72fc3f6c40c4b0b6b587"} Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.351985 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-557678d96b-tqmtc" event={"ID":"ec9d0890-b994-4ada-a802-a43cbe2fc50e","Type":"ContainerStarted","Data":"b82ce8c445c2482d2fb9b94d6d9a975cc8eaf50ae7bfed04a9c1cf3a0f0de055"} Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.353845 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9bdr9" 
event={"ID":"d43521c3-8892-4a34-af06-1d93a8f50c38","Type":"ContainerDied","Data":"8aa4fc902c564de30741071f4244f41e82c8be0146f266cf515e31ada855b9d7"} Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.353896 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aa4fc902c564de30741071f4244f41e82c8be0146f266cf515e31ada855b9d7" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.353945 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9bdr9" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.368970 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5758749df4-tzq2d" event={"ID":"02f71999-ab64-4b39-a9d6-48cb41d6b9b1","Type":"ContainerStarted","Data":"1cfa2c678bb87e386f6b2842feef808a69d0eb91e5b2e24664c90e05c54cd448"} Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.376966 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" event={"ID":"a1af432c-5ab8-4eb5-87f0-2f9519c1004b","Type":"ContainerStarted","Data":"0df1b9040e5a2aa47634196ca117ebdcf5241e3365a1c98f1d5a0bc1f74a800b"} Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.377033 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.377043 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.622850 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 11:01:47 crc kubenswrapper[4736]: E0214 11:01:47.623494 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d43521c3-8892-4a34-af06-1d93a8f50c38" containerName="cinder-db-sync" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.623505 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d43521c3-8892-4a34-af06-1d93a8f50c38" 
containerName="cinder-db-sync" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.623672 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d43521c3-8892-4a34-af06-1d93a8f50c38" containerName="cinder-db-sync" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.624586 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.637077 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.644086 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jplsk" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.644310 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.653513 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1366cbc-87c7-41ec-9baf-647cfdb2add9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.653563 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.653593 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.653614 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqr9\" (UniqueName: \"kubernetes.io/projected/d1366cbc-87c7-41ec-9baf-647cfdb2add9-kube-api-access-lhqr9\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.653637 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.653678 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-scripts\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.654031 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.659400 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.709205 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.709249 4736 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.757964 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1366cbc-87c7-41ec-9baf-647cfdb2add9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.758077 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.758109 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.758145 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhqr9\" (UniqueName: \"kubernetes.io/projected/d1366cbc-87c7-41ec-9baf-647cfdb2add9-kube-api-access-lhqr9\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.758168 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.758224 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-scripts\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.759298 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1366cbc-87c7-41ec-9baf-647cfdb2add9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.791485 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-r7jnf"] Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.833279 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wqmzf"] Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.834678 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.850760 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wqmzf"] Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.859882 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-scripts\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.863914 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.882430 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.882617 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-config\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.882693 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz7xq\" (UniqueName: 
\"kubernetes.io/projected/ca482914-1fef-4b08-a3c6-5b1418426443-kube-api-access-qz7xq\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.882806 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.882829 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.882943 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.883785 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.917084 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.917877 
4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.918286 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhqr9\" (UniqueName: \"kubernetes.io/projected/d1366cbc-87c7-41ec-9baf-647cfdb2add9-kube-api-access-lhqr9\") pod \"cinder-scheduler-0\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " pod="openstack/cinder-scheduler-0" Feb 14 11:01:47 crc kubenswrapper[4736]: I0214 11:01:47.965899 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:47.998484 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.014281 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.028159 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.060794 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-config\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.060944 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz7xq\" (UniqueName: \"kubernetes.io/projected/ca482914-1fef-4b08-a3c6-5b1418426443-kube-api-access-qz7xq\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061096 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061124 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061281 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061398 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061528 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45523272-ea03-44ab-a51a-1759b0514f47-etc-machine-id\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061601 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061622 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-scripts\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061820 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061880 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45523272-ea03-44ab-a51a-1759b0514f47-logs\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061920 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data-custom\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.061974 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfktd\" (UniqueName: \"kubernetes.io/projected/45523272-ea03-44ab-a51a-1759b0514f47-kube-api-access-pfktd\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.072962 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.073500 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-config\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc 
kubenswrapper[4736]: I0214 11:01:48.074081 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.089270 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.097403 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.171140 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz7xq\" (UniqueName: \"kubernetes.io/projected/ca482914-1fef-4b08-a3c6-5b1418426443-kube-api-access-qz7xq\") pod \"dnsmasq-dns-5c9776ccc5-wqmzf\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") " pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.191178 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.191384 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45523272-ea03-44ab-a51a-1759b0514f47-etc-machine-id\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.191420 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-scripts\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.191523 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.191556 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45523272-ea03-44ab-a51a-1759b0514f47-logs\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.191582 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data-custom\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.204817 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45523272-ea03-44ab-a51a-1759b0514f47-etc-machine-id\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc 
kubenswrapper[4736]: I0214 11:01:48.191612 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfktd\" (UniqueName: \"kubernetes.io/projected/45523272-ea03-44ab-a51a-1759b0514f47-kube-api-access-pfktd\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.211290 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45523272-ea03-44ab-a51a-1759b0514f47-logs\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.214721 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.222882 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data-custom\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.230046 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.233314 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-scripts\") pod \"cinder-api-0\" (UID: 
\"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.233434 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfktd\" (UniqueName: \"kubernetes.io/projected/45523272-ea03-44ab-a51a-1759b0514f47-kube-api-access-pfktd\") pod \"cinder-api-0\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.353660 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.371205 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.454573 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5758749df4-tzq2d" event={"ID":"02f71999-ab64-4b39-a9d6-48cb41d6b9b1","Type":"ContainerStarted","Data":"f15edd923a2b1dac96621e7a1ff5396326986364cdad085becaffe3c6cd1d1cd"} Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.464552 4736 generic.go:334] "Generic (PLEG): container finished" podID="626458da-6f6a-4fbd-9eb6-cbf9120bdb32" containerID="e0b4b6cff6e9305bd6ac4fc2a8a3838b41c5298f45909180a365eed3e88ef298" exitCode=0 Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.464602 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" event={"ID":"626458da-6f6a-4fbd-9eb6-cbf9120bdb32","Type":"ContainerDied","Data":"e0b4b6cff6e9305bd6ac4fc2a8a3838b41c5298f45909180a365eed3e88ef298"} Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.505911 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.505934 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:01:48 crc kubenswrapper[4736]: 
I0214 11:01:48.507499 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-557678d96b-tqmtc" event={"ID":"ec9d0890-b994-4ada-a802-a43cbe2fc50e","Type":"ContainerStarted","Data":"897dfe675ee397c429ac8c297cd32488a7ccd259a9ca6d034dc1c0ce69fa1687"} Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.507545 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.507575 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.780134 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-557678d96b-tqmtc" podStartSLOduration=3.780119236 podStartE2EDuration="3.780119236s" podCreationTimestamp="2026-02-14 11:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:48.544939654 +0000 UTC m=+1218.913567022" watchObservedRunningTime="2026-02-14 11:01:48.780119236 +0000 UTC m=+1219.148746604" Feb 14 11:01:48 crc kubenswrapper[4736]: I0214 11:01:48.786556 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.180262 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.193418 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wqmzf"] Feb 14 11:01:49 crc kubenswrapper[4736]: W0214 11:01:49.240849 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca482914_1fef_4b08_a3c6_5b1418426443.slice/crio-eaa4573d09bd4d038e2381dfb221aa2b910eb85499ae46f02665184dd6a39c2d WatchSource:0}: Error finding 
container eaa4573d09bd4d038e2381dfb221aa2b910eb85499ae46f02665184dd6a39c2d: Status 404 returned error can't find the container with id eaa4573d09bd4d038e2381dfb221aa2b910eb85499ae46f02665184dd6a39c2d Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.357787 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.532918 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" event={"ID":"ca482914-1fef-4b08-a3c6-5b1418426443","Type":"ContainerStarted","Data":"eaa4573d09bd4d038e2381dfb221aa2b910eb85499ae46f02665184dd6a39c2d"} Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.535237 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.535689 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-r7jnf" event={"ID":"626458da-6f6a-4fbd-9eb6-cbf9120bdb32","Type":"ContainerDied","Data":"f26d15d49d4c1a43e754422664f0ba0284e8023ab280d03315605074265d92d5"} Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.535756 4736 scope.go:117] "RemoveContainer" containerID="e0b4b6cff6e9305bd6ac4fc2a8a3838b41c5298f45909180a365eed3e88ef298" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.541483 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-sb\") pod \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.541713 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-svc\") pod 
\"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.541875 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-swift-storage-0\") pod \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.541963 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-kube-api-access-nf629\") pod \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.542151 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-nb\") pod \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.542305 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-config\") pod \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\" (UID: \"626458da-6f6a-4fbd-9eb6-cbf9120bdb32\") " Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.558323 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5758749df4-tzq2d" event={"ID":"02f71999-ab64-4b39-a9d6-48cb41d6b9b1","Type":"ContainerStarted","Data":"448b6ccadf615f147430eb11f16cfc37c9dfe4820ec6984b16fb18c84b4b7ad2"} Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.558545 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.558566 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.569987 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1366cbc-87c7-41ec-9baf-647cfdb2add9","Type":"ContainerStarted","Data":"fa8fc7056a9887d73ac68b2abc51dcd4f32f90a828e0cf17e778ab3d7a4842e0"} Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.590512 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5758749df4-tzq2d" podStartSLOduration=4.590499358 podStartE2EDuration="4.590499358s" podCreationTimestamp="2026-02-14 11:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:49.58816474 +0000 UTC m=+1219.956792108" watchObservedRunningTime="2026-02-14 11:01:49.590499358 +0000 UTC m=+1219.959126726" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.591941 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45523272-ea03-44ab-a51a-1759b0514f47","Type":"ContainerStarted","Data":"5d63857f296d5196291e47c0ceb425acd17b2e2316a3b8d07ad9e379f9f2d062"} Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.592890 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-kube-api-access-nf629" (OuterVolumeSpecName: "kube-api-access-nf629") pod "626458da-6f6a-4fbd-9eb6-cbf9120bdb32" (UID: "626458da-6f6a-4fbd-9eb6-cbf9120bdb32"). InnerVolumeSpecName "kube-api-access-nf629". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.593264 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "626458da-6f6a-4fbd-9eb6-cbf9120bdb32" (UID: "626458da-6f6a-4fbd-9eb6-cbf9120bdb32"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.624232 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "626458da-6f6a-4fbd-9eb6-cbf9120bdb32" (UID: "626458da-6f6a-4fbd-9eb6-cbf9120bdb32"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.624536 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "626458da-6f6a-4fbd-9eb6-cbf9120bdb32" (UID: "626458da-6f6a-4fbd-9eb6-cbf9120bdb32"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.628135 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "626458da-6f6a-4fbd-9eb6-cbf9120bdb32" (UID: "626458da-6f6a-4fbd-9eb6-cbf9120bdb32"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.644987 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.645014 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.645025 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-kube-api-access-nf629\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.645033 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.645044 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.646975 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-config" (OuterVolumeSpecName: "config") pod "626458da-6f6a-4fbd-9eb6-cbf9120bdb32" (UID: "626458da-6f6a-4fbd-9eb6-cbf9120bdb32"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.747960 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626458da-6f6a-4fbd-9eb6-cbf9120bdb32-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.926943 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-r7jnf"] Feb 14 11:01:49 crc kubenswrapper[4736]: I0214 11:01:49.933008 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-r7jnf"] Feb 14 11:01:50 crc kubenswrapper[4736]: I0214 11:01:50.272281 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:01:50 crc kubenswrapper[4736]: I0214 11:01:50.408203 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626458da-6f6a-4fbd-9eb6-cbf9120bdb32" path="/var/lib/kubelet/pods/626458da-6f6a-4fbd-9eb6-cbf9120bdb32/volumes" Feb 14 11:01:50 crc kubenswrapper[4736]: I0214 11:01:50.435851 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d96c5d8-mfqqp" podUID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 14 11:01:50 crc kubenswrapper[4736]: I0214 11:01:50.618404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45523272-ea03-44ab-a51a-1759b0514f47","Type":"ContainerStarted","Data":"ab46d7f0994805bd29196ff2b7b911a75314d062faab1bd6a6dedf470686cd0a"} Feb 14 11:01:50 crc 
kubenswrapper[4736]: I0214 11:01:50.628163 4736 generic.go:334] "Generic (PLEG): container finished" podID="ca482914-1fef-4b08-a3c6-5b1418426443" containerID="bdf158c3edd14e4da655d8c8f8f560d911e75c574f29a31e9a15e3a673357c73" exitCode=0 Feb 14 11:01:50 crc kubenswrapper[4736]: I0214 11:01:50.628211 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" event={"ID":"ca482914-1fef-4b08-a3c6-5b1418426443","Type":"ContainerDied","Data":"bdf158c3edd14e4da655d8c8f8f560d911e75c574f29a31e9a15e3a673357c73"} Feb 14 11:01:50 crc kubenswrapper[4736]: I0214 11:01:50.852602 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.476578 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79d9bb575d-6pwpg"] Feb 14 11:01:52 crc kubenswrapper[4736]: E0214 11:01:52.483353 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626458da-6f6a-4fbd-9eb6-cbf9120bdb32" containerName="init" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.483390 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="626458da-6f6a-4fbd-9eb6-cbf9120bdb32" containerName="init" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.483638 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="626458da-6f6a-4fbd-9eb6-cbf9120bdb32" containerName="init" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.484530 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.495458 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.495796 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.509277 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79d9bb575d-6pwpg"] Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.604844 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-public-tls-certs\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.605088 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-internal-tls-certs\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.605143 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqs6h\" (UniqueName: \"kubernetes.io/projected/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-kube-api-access-rqs6h\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.605179 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-logs\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.605205 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-combined-ca-bundle\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.605237 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-config-data\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.605267 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-config-data-custom\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.661933 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" event={"ID":"a1af432c-5ab8-4eb5-87f0-2f9519c1004b","Type":"ContainerStarted","Data":"2eceb6af54e69fa5472432aadb691be6aa8b4aab35bda866f6c26f0140c308f4"} Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.667707 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79c85f78bf-qrrmn" 
event={"ID":"dc8cc8f5-bfab-490d-be14-44be8090fb21","Type":"ContainerStarted","Data":"8ec1dd544de394c1edd7dae42793acc1291ec4a4b0d452217530ef3e24f63627"} Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.678930 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" event={"ID":"ca482914-1fef-4b08-a3c6-5b1418426443","Type":"ContainerStarted","Data":"5b14e621278328e3b0f7d188c69fe4737f4603c8ee4a047cc270322b9f431c20"} Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.679922 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.704700 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" podStartSLOduration=5.704682006 podStartE2EDuration="5.704682006s" podCreationTimestamp="2026-02-14 11:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:52.699868297 +0000 UTC m=+1223.068495665" watchObservedRunningTime="2026-02-14 11:01:52.704682006 +0000 UTC m=+1223.073309364" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.706894 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-config-data\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.706983 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-config-data-custom\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 
11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.707008 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-public-tls-certs\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.707112 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-internal-tls-certs\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.707212 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqs6h\" (UniqueName: \"kubernetes.io/projected/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-kube-api-access-rqs6h\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.707290 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-logs\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.707333 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-combined-ca-bundle\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 
11:01:52.708789 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-logs\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.713652 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-config-data\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.714201 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-internal-tls-certs\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.714663 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-combined-ca-bundle\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.717163 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-config-data-custom\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.726266 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-public-tls-certs\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.726712 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqs6h\" (UniqueName: \"kubernetes.io/projected/fe4bb48e-4d5f-4b38-b862-d2fe632087a8-kube-api-access-rqs6h\") pod \"barbican-api-79d9bb575d-6pwpg\" (UID: \"fe4bb48e-4d5f-4b38-b862-d2fe632087a8\") " pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.810367 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:01:52 crc kubenswrapper[4736]: I0214 11:01:52.843369 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.323703 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77f6fd57bc-nlqb5"] Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.324557 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77f6fd57bc-nlqb5" podUID="2c911912-7053-4fc2-a31e-20bcce081834" containerName="neutron-api" containerID="cri-o://07e1681a68e5e0fff31272879d764409fed468fa7ef7cde47aa3cf5cadb8c5d7" gracePeriod=30 Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.324695 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77f6fd57bc-nlqb5" podUID="2c911912-7053-4fc2-a31e-20bcce081834" containerName="neutron-httpd" containerID="cri-o://22c5d0275efb993b2fcf4238f8b3c5bc3be2100cd0a6bb245f216c7f2dc32105" gracePeriod=30 Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.365335 4736 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.410247 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7979c77cb9-ql2gq"] Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.415466 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.427895 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7979c77cb9-ql2gq"] Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.581515 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-internal-tls-certs\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.581556 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-config\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.581578 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-public-tls-certs\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.581642 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-combined-ca-bundle\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.581683 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-httpd-config\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.581713 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb8bv\" (UniqueName: \"kubernetes.io/projected/8af015af-390d-4300-95e1-976c308f136c-kube-api-access-lb8bv\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.581782 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-ovndb-tls-certs\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.684304 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-internal-tls-certs\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.684342 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-config\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.684370 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-public-tls-certs\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.684425 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-combined-ca-bundle\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.684470 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-httpd-config\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.684501 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb8bv\" (UniqueName: \"kubernetes.io/projected/8af015af-390d-4300-95e1-976c308f136c-kube-api-access-lb8bv\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.684536 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-ovndb-tls-certs\") pod 
\"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.697614 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-internal-tls-certs\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.705435 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-ovndb-tls-certs\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.706100 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-httpd-config\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.728315 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-combined-ca-bundle\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.728817 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-public-tls-certs\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 
11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.749077 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8af015af-390d-4300-95e1-976c308f136c-config\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.749210 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" event={"ID":"a1af432c-5ab8-4eb5-87f0-2f9519c1004b","Type":"ContainerStarted","Data":"2a88cddc1466f3c29ebea94ffe61a4a6ee3228a28302249e5b79c3fd99d903ee"} Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.779959 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb8bv\" (UniqueName: \"kubernetes.io/projected/8af015af-390d-4300-95e1-976c308f136c-kube-api-access-lb8bv\") pod \"neutron-7979c77cb9-ql2gq\" (UID: \"8af015af-390d-4300-95e1-976c308f136c\") " pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.805273 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79c85f78bf-qrrmn" event={"ID":"dc8cc8f5-bfab-490d-be14-44be8090fb21","Type":"ContainerStarted","Data":"aa484d38de2cbaa4dc3770a5540c8e92729a565f657c9b1786fd8ade767802d9"} Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.810668 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79d9bb575d-6pwpg"] Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.821911 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.858394 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-85957cbc8-r7xrw" podStartSLOduration=3.797368047 podStartE2EDuration="8.858378312s" podCreationTimestamp="2026-02-14 11:01:45 +0000 UTC" firstStartedPulling="2026-02-14 11:01:46.843543936 +0000 UTC m=+1217.212171304" lastFinishedPulling="2026-02-14 11:01:51.904554201 +0000 UTC m=+1222.273181569" observedRunningTime="2026-02-14 11:01:53.806630663 +0000 UTC m=+1224.175258041" watchObservedRunningTime="2026-02-14 11:01:53.858378312 +0000 UTC m=+1224.227005680" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.883912 4736 generic.go:334] "Generic (PLEG): container finished" podID="2620f316-944b-449d-88cf-60670074d345" containerID="62cd66d80e587c5b3ee68706657070a4caf3c084aefb55a5cab8454c9516dba4" exitCode=137 Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.883949 4736 generic.go:334] "Generic (PLEG): container finished" podID="2620f316-944b-449d-88cf-60670074d345" containerID="d83ed0c2093eb543cc31edff4ef23ff64946945c062191c857d39c565671171e" exitCode=137 Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.884004 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5644b876d5-wp4lb" event={"ID":"2620f316-944b-449d-88cf-60670074d345","Type":"ContainerDied","Data":"62cd66d80e587c5b3ee68706657070a4caf3c084aefb55a5cab8454c9516dba4"} Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.884035 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5644b876d5-wp4lb" event={"ID":"2620f316-944b-449d-88cf-60670074d345","Type":"ContainerDied","Data":"d83ed0c2093eb543cc31edff4ef23ff64946945c062191c857d39c565671171e"} Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.908478 4736 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-api-0" podUID="45523272-ea03-44ab-a51a-1759b0514f47" containerName="cinder-api-log" containerID="cri-o://ab46d7f0994805bd29196ff2b7b911a75314d062faab1bd6a6dedf470686cd0a" gracePeriod=30 Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.908684 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45523272-ea03-44ab-a51a-1759b0514f47","Type":"ContainerStarted","Data":"68af9810b3add1464a6f54af1e8d74133a728ce46f22ba9636d7eb517c22b403"} Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.908790 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="45523272-ea03-44ab-a51a-1759b0514f47" containerName="cinder-api" containerID="cri-o://68af9810b3add1464a6f54af1e8d74133a728ce46f22ba9636d7eb517c22b403" gracePeriod=30 Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.908884 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 14 11:01:53 crc kubenswrapper[4736]: I0214 11:01:53.916196 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-79c85f78bf-qrrmn" podStartSLOduration=3.574302055 podStartE2EDuration="8.916180926s" podCreationTimestamp="2026-02-14 11:01:45 +0000 UTC" firstStartedPulling="2026-02-14 11:01:46.55990095 +0000 UTC m=+1216.928528318" lastFinishedPulling="2026-02-14 11:01:51.901779821 +0000 UTC m=+1222.270407189" observedRunningTime="2026-02-14 11:01:53.83381639 +0000 UTC m=+1224.202443758" watchObservedRunningTime="2026-02-14 11:01:53.916180926 +0000 UTC m=+1224.284808294" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.019668 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.019648803 podStartE2EDuration="7.019648803s" podCreationTimestamp="2026-02-14 11:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:53.998765428 +0000 UTC m=+1224.367392796" watchObservedRunningTime="2026-02-14 11:01:54.019648803 +0000 UTC m=+1224.388276181" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.225239 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.321969 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2620f316-944b-449d-88cf-60670074d345-horizon-secret-key\") pod \"2620f316-944b-449d-88cf-60670074d345\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.322028 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-scripts\") pod \"2620f316-944b-449d-88cf-60670074d345\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.322057 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g647v\" (UniqueName: \"kubernetes.io/projected/2620f316-944b-449d-88cf-60670074d345-kube-api-access-g647v\") pod \"2620f316-944b-449d-88cf-60670074d345\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.322086 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-config-data\") pod \"2620f316-944b-449d-88cf-60670074d345\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.322136 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2620f316-944b-449d-88cf-60670074d345-logs\") pod \"2620f316-944b-449d-88cf-60670074d345\" (UID: \"2620f316-944b-449d-88cf-60670074d345\") " Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.322793 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2620f316-944b-449d-88cf-60670074d345-logs" (OuterVolumeSpecName: "logs") pod "2620f316-944b-449d-88cf-60670074d345" (UID: "2620f316-944b-449d-88cf-60670074d345"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.355041 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2620f316-944b-449d-88cf-60670074d345-kube-api-access-g647v" (OuterVolumeSpecName: "kube-api-access-g647v") pod "2620f316-944b-449d-88cf-60670074d345" (UID: "2620f316-944b-449d-88cf-60670074d345"). InnerVolumeSpecName "kube-api-access-g647v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.360343 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2620f316-944b-449d-88cf-60670074d345-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2620f316-944b-449d-88cf-60670074d345" (UID: "2620f316-944b-449d-88cf-60670074d345"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.386761 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-config-data" (OuterVolumeSpecName: "config-data") pod "2620f316-944b-449d-88cf-60670074d345" (UID: "2620f316-944b-449d-88cf-60670074d345"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.398161 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-scripts" (OuterVolumeSpecName: "scripts") pod "2620f316-944b-449d-88cf-60670074d345" (UID: "2620f316-944b-449d-88cf-60670074d345"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.423916 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2620f316-944b-449d-88cf-60670074d345-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.424145 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.424231 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g647v\" (UniqueName: \"kubernetes.io/projected/2620f316-944b-449d-88cf-60670074d345-kube-api-access-g647v\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.424286 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2620f316-944b-449d-88cf-60670074d345-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.424342 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2620f316-944b-449d-88cf-60670074d345-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.747058 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7979c77cb9-ql2gq"] Feb 14 11:01:54 crc kubenswrapper[4736]: 
I0214 11:01:54.770965 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.771076 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.955411 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5644b876d5-wp4lb" event={"ID":"2620f316-944b-449d-88cf-60670074d345","Type":"ContainerDied","Data":"027640721f6d0ff37b3530b378a7be001676b46b094192d9197d31bbef5aa6c7"} Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.955461 4736 scope.go:117] "RemoveContainer" containerID="62cd66d80e587c5b3ee68706657070a4caf3c084aefb55a5cab8454c9516dba4" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.955577 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5644b876d5-wp4lb" Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.962371 4736 generic.go:334] "Generic (PLEG): container finished" podID="45523272-ea03-44ab-a51a-1759b0514f47" containerID="ab46d7f0994805bd29196ff2b7b911a75314d062faab1bd6a6dedf470686cd0a" exitCode=143 Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.962429 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45523272-ea03-44ab-a51a-1759b0514f47","Type":"ContainerDied","Data":"ab46d7f0994805bd29196ff2b7b911a75314d062faab1bd6a6dedf470686cd0a"} Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.973130 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79d9bb575d-6pwpg" event={"ID":"fe4bb48e-4d5f-4b38-b862-d2fe632087a8","Type":"ContainerStarted","Data":"bf15b17f2cb365b5242609f70d3a426bdd933c8618464d00cf0626e98a14f335"} Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.973180 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79d9bb575d-6pwpg" 
event={"ID":"fe4bb48e-4d5f-4b38-b862-d2fe632087a8","Type":"ContainerStarted","Data":"4042ab767dcd60cdda11edec43764a6b14c5e2aa6b2177b3d41a0c454b291005"} Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.984497 4736 generic.go:334] "Generic (PLEG): container finished" podID="2c911912-7053-4fc2-a31e-20bcce081834" containerID="22c5d0275efb993b2fcf4238f8b3c5bc3be2100cd0a6bb245f216c7f2dc32105" exitCode=0 Feb 14 11:01:54 crc kubenswrapper[4736]: I0214 11:01:54.984600 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77f6fd57bc-nlqb5" event={"ID":"2c911912-7053-4fc2-a31e-20bcce081834","Type":"ContainerDied","Data":"22c5d0275efb993b2fcf4238f8b3c5bc3be2100cd0a6bb245f216c7f2dc32105"} Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.033771 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1366cbc-87c7-41ec-9baf-647cfdb2add9","Type":"ContainerStarted","Data":"7cc377b8d9123e223ac085493dd585a2ba8187734ac224f8bb152744bb265e25"} Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.042521 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7979c77cb9-ql2gq" event={"ID":"8af015af-390d-4300-95e1-976c308f136c","Type":"ContainerStarted","Data":"f2843e66a168fd662112bd18bb980b83323760d436707758c66daeb944225548"} Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.061075 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5644b876d5-wp4lb"] Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.067953 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5644b876d5-wp4lb"] Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.174991 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-77f6fd57bc-nlqb5" podUID="2c911912-7053-4fc2-a31e-20bcce081834" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": dial tcp 10.217.0.156:9696: connect: 
connection refused" Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.353566 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.353713 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.377892 4736 scope.go:117] "RemoveContainer" containerID="d83ed0c2093eb543cc31edff4ef23ff64946945c062191c857d39c565671171e" Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.688780 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 11:01:55 crc kubenswrapper[4736]: I0214 11:01:55.912636 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.167292 4736 generic.go:334] "Generic (PLEG): container finished" podID="45523272-ea03-44ab-a51a-1759b0514f47" containerID="68af9810b3add1464a6f54af1e8d74133a728ce46f22ba9636d7eb517c22b403" exitCode=0 Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.167367 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45523272-ea03-44ab-a51a-1759b0514f47","Type":"ContainerDied","Data":"68af9810b3add1464a6f54af1e8d74133a728ce46f22ba9636d7eb517c22b403"} Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.183594 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79d9bb575d-6pwpg" event={"ID":"fe4bb48e-4d5f-4b38-b862-d2fe632087a8","Type":"ContainerStarted","Data":"1e614d75555e348a367ee6d85bbd8604344f1c41d77d310212da3fb713ce95bd"} Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.183684 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.183710 
4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.200720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1366cbc-87c7-41ec-9baf-647cfdb2add9","Type":"ContainerStarted","Data":"1727d8fb96c718f44908d2de2ada62508abb3abbd7310c9038f4f6fda08fd773"} Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.234447 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79d9bb575d-6pwpg" podStartSLOduration=4.234425371 podStartE2EDuration="4.234425371s" podCreationTimestamp="2026-02-14 11:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:56.226723668 +0000 UTC m=+1226.595351036" watchObservedRunningTime="2026-02-14 11:01:56.234425371 +0000 UTC m=+1226.603052729" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.253398 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.7566726710000005 podStartE2EDuration="9.253367509s" podCreationTimestamp="2026-02-14 11:01:47 +0000 UTC" firstStartedPulling="2026-02-14 11:01:48.849360921 +0000 UTC m=+1219.217988289" lastFinishedPulling="2026-02-14 11:01:52.346055759 +0000 UTC m=+1222.714683127" observedRunningTime="2026-02-14 11:01:56.251635079 +0000 UTC m=+1226.620262447" watchObservedRunningTime="2026-02-14 11:01:56.253367509 +0000 UTC m=+1226.621994867" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.255954 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7979c77cb9-ql2gq" event={"ID":"8af015af-390d-4300-95e1-976c308f136c","Type":"ContainerStarted","Data":"382abad8a9cbd74fd46d7b55b388be41e3849945fccf90cb7886a354efc5fc8f"} Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.256007 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7979c77cb9-ql2gq" event={"ID":"8af015af-390d-4300-95e1-976c308f136c","Type":"ContainerStarted","Data":"265ec1cd7c8f60a1dc473639cc202fac45099c27968572fcce2b1264e5d488e7"} Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.256990 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.305132 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7979c77cb9-ql2gq" podStartSLOduration=3.305106588 podStartE2EDuration="3.305106588s" podCreationTimestamp="2026-02-14 11:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:01:56.28516092 +0000 UTC m=+1226.653788298" watchObservedRunningTime="2026-02-14 11:01:56.305106588 +0000 UTC m=+1226.673733976" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.392118 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.413456 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2620f316-944b-449d-88cf-60670074d345" path="/var/lib/kubelet/pods/2620f316-944b-449d-88cf-60670074d345/volumes" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.481176 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-combined-ca-bundle\") pod \"45523272-ea03-44ab-a51a-1759b0514f47\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.481297 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data-custom\") pod \"45523272-ea03-44ab-a51a-1759b0514f47\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.481322 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45523272-ea03-44ab-a51a-1759b0514f47-logs\") pod \"45523272-ea03-44ab-a51a-1759b0514f47\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.481338 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfktd\" (UniqueName: \"kubernetes.io/projected/45523272-ea03-44ab-a51a-1759b0514f47-kube-api-access-pfktd\") pod \"45523272-ea03-44ab-a51a-1759b0514f47\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.481386 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data\") pod 
\"45523272-ea03-44ab-a51a-1759b0514f47\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.481413 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-scripts\") pod \"45523272-ea03-44ab-a51a-1759b0514f47\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.481518 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45523272-ea03-44ab-a51a-1759b0514f47-etc-machine-id\") pod \"45523272-ea03-44ab-a51a-1759b0514f47\" (UID: \"45523272-ea03-44ab-a51a-1759b0514f47\") " Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.482026 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45523272-ea03-44ab-a51a-1759b0514f47-logs" (OuterVolumeSpecName: "logs") pod "45523272-ea03-44ab-a51a-1759b0514f47" (UID: "45523272-ea03-44ab-a51a-1759b0514f47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.484868 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45523272-ea03-44ab-a51a-1759b0514f47-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "45523272-ea03-44ab-a51a-1759b0514f47" (UID: "45523272-ea03-44ab-a51a-1759b0514f47"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.514017 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45523272-ea03-44ab-a51a-1759b0514f47-kube-api-access-pfktd" (OuterVolumeSpecName: "kube-api-access-pfktd") pod "45523272-ea03-44ab-a51a-1759b0514f47" (UID: "45523272-ea03-44ab-a51a-1759b0514f47"). 
InnerVolumeSpecName "kube-api-access-pfktd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.516200 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "45523272-ea03-44ab-a51a-1759b0514f47" (UID: "45523272-ea03-44ab-a51a-1759b0514f47"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.516345 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-scripts" (OuterVolumeSpecName: "scripts") pod "45523272-ea03-44ab-a51a-1759b0514f47" (UID: "45523272-ea03-44ab-a51a-1759b0514f47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.548103 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45523272-ea03-44ab-a51a-1759b0514f47" (UID: "45523272-ea03-44ab-a51a-1759b0514f47"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.583200 4736 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45523272-ea03-44ab-a51a-1759b0514f47-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.583235 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.583244 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.583253 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45523272-ea03-44ab-a51a-1759b0514f47-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.583262 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfktd\" (UniqueName: \"kubernetes.io/projected/45523272-ea03-44ab-a51a-1759b0514f47-kube-api-access-pfktd\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.583272 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.585772 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data" (OuterVolumeSpecName: "config-data") pod "45523272-ea03-44ab-a51a-1759b0514f47" (UID: "45523272-ea03-44ab-a51a-1759b0514f47"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:01:56 crc kubenswrapper[4736]: I0214 11:01:56.684983 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45523272-ea03-44ab-a51a-1759b0514f47-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.275551 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.275847 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45523272-ea03-44ab-a51a-1759b0514f47","Type":"ContainerDied","Data":"5d63857f296d5196291e47c0ceb425acd17b2e2316a3b8d07ad9e379f9f2d062"} Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.275900 4736 scope.go:117] "RemoveContainer" containerID="68af9810b3add1464a6f54af1e8d74133a728ce46f22ba9636d7eb517c22b403" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.307004 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.307670 4736 scope.go:117] "RemoveContainer" containerID="ab46d7f0994805bd29196ff2b7b911a75314d062faab1bd6a6dedf470686cd0a" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.330515 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.359501 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 14 11:01:57 crc kubenswrapper[4736]: E0214 11:01:57.359883 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45523272-ea03-44ab-a51a-1759b0514f47" containerName="cinder-api-log" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.359900 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="45523272-ea03-44ab-a51a-1759b0514f47" 
containerName="cinder-api-log" Feb 14 11:01:57 crc kubenswrapper[4736]: E0214 11:01:57.359914 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45523272-ea03-44ab-a51a-1759b0514f47" containerName="cinder-api" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.359921 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="45523272-ea03-44ab-a51a-1759b0514f47" containerName="cinder-api" Feb 14 11:01:57 crc kubenswrapper[4736]: E0214 11:01:57.359935 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2620f316-944b-449d-88cf-60670074d345" containerName="horizon" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.359941 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2620f316-944b-449d-88cf-60670074d345" containerName="horizon" Feb 14 11:01:57 crc kubenswrapper[4736]: E0214 11:01:57.359966 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2620f316-944b-449d-88cf-60670074d345" containerName="horizon-log" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.359971 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2620f316-944b-449d-88cf-60670074d345" containerName="horizon-log" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.360133 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="45523272-ea03-44ab-a51a-1759b0514f47" containerName="cinder-api" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.360149 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2620f316-944b-449d-88cf-60670074d345" containerName="horizon" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.360170 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2620f316-944b-449d-88cf-60670074d345" containerName="horizon-log" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.360180 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="45523272-ea03-44ab-a51a-1759b0514f47" containerName="cinder-api-log" Feb 14 11:01:57 
crc kubenswrapper[4736]: I0214 11:01:57.361099 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.365734 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.367035 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.369024 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.391473 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.395829 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-config-data\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.401504 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32441c5d-4041-4687-b31b-fb121c4d01a7-logs\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.401708 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-scripts\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.402730 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.402903 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32441c5d-4041-4687-b31b-fb121c4d01a7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.403103 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-config-data-custom\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.403182 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9tb\" (UniqueName: \"kubernetes.io/projected/32441c5d-4041-4687-b31b-fb121c4d01a7-kube-api-access-5f9tb\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.403305 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.403402 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.506688 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.506976 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-config-data\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.507115 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32441c5d-4041-4687-b31b-fb121c4d01a7-logs\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.507231 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-scripts\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.507320 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 
11:01:57.507397 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32441c5d-4041-4687-b31b-fb121c4d01a7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.507510 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-config-data-custom\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.507579 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f9tb\" (UniqueName: \"kubernetes.io/projected/32441c5d-4041-4687-b31b-fb121c4d01a7-kube-api-access-5f9tb\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.507663 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.507522 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32441c5d-4041-4687-b31b-fb121c4d01a7-logs\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.508066 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32441c5d-4041-4687-b31b-fb121c4d01a7-etc-machine-id\") pod \"cinder-api-0\" 
(UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.511009 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-scripts\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.524038 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9tb\" (UniqueName: \"kubernetes.io/projected/32441c5d-4041-4687-b31b-fb121c4d01a7-kube-api-access-5f9tb\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.525328 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.534439 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.535214 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-config-data-custom\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.536809 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-config-data\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.539722 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32441c5d-4041-4687-b31b-fb121c4d01a7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"32441c5d-4041-4687-b31b-fb121c4d01a7\") " pod="openstack/cinder-api-0" Feb 14 11:01:57 crc kubenswrapper[4736]: I0214 11:01:57.689096 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 11:01:58 crc kubenswrapper[4736]: I0214 11:01:58.028765 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 11:01:58 crc kubenswrapper[4736]: I0214 11:01:58.354878 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:01:58 crc kubenswrapper[4736]: I0214 11:01:58.417383 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45523272-ea03-44ab-a51a-1759b0514f47" path="/var/lib/kubelet/pods/45523272-ea03-44ab-a51a-1759b0514f47/volumes" Feb 14 11:01:58 crc kubenswrapper[4736]: I0214 11:01:58.447129 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7tmft"] Feb 14 11:01:58 crc kubenswrapper[4736]: I0214 11:01:58.447571 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" podUID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerName="dnsmasq-dns" containerID="cri-o://adb59dd9fd022260cdba7dc19c6e034ca0a99277c8227c9297f67a6c41fdb2d5" gracePeriod=10 Feb 14 11:01:59 crc kubenswrapper[4736]: I0214 11:01:59.359042 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerID="adb59dd9fd022260cdba7dc19c6e034ca0a99277c8227c9297f67a6c41fdb2d5" exitCode=0 Feb 14 11:01:59 crc kubenswrapper[4736]: I0214 11:01:59.359259 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" event={"ID":"fba91db3-6b2e-40fa-87dd-9211f5976bec","Type":"ContainerDied","Data":"adb59dd9fd022260cdba7dc19c6e034ca0a99277c8227c9297f67a6c41fdb2d5"} Feb 14 11:02:00 crc kubenswrapper[4736]: I0214 11:02:00.023961 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5758749df4-tzq2d" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 11:02:00 crc kubenswrapper[4736]: I0214 11:02:00.024303 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:02:00 crc kubenswrapper[4736]: I0214 11:02:00.272591 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:02:00 crc kubenswrapper[4736]: I0214 11:02:00.272702 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:02:00 crc kubenswrapper[4736]: I0214 11:02:00.273576 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"e9afa700f170b4aa20f9303e305f513dc88cc3df4f06793ac247cb0b4ca2f8ad"} pod="openstack/horizon-54b8d5f54d-bvjc4" containerMessage="Container horizon failed startup probe, will be restarted" Feb 14 11:02:00 crc 
kubenswrapper[4736]: I0214 11:02:00.273627 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" containerID="cri-o://e9afa700f170b4aa20f9303e305f513dc88cc3df4f06793ac247cb0b4ca2f8ad" gracePeriod=30 Feb 14 11:02:00 crc kubenswrapper[4736]: I0214 11:02:00.435843 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d96c5d8-mfqqp" podUID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 14 11:02:00 crc kubenswrapper[4736]: I0214 11:02:00.436115 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:02:00 crc kubenswrapper[4736]: I0214 11:02:00.436843 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"04fd8fab3519745e093dbed42df83c22c60787a9527db958728640db4965d92b"} pod="openstack/horizon-78d96c5d8-mfqqp" containerMessage="Container horizon failed startup probe, will be restarted" Feb 14 11:02:00 crc kubenswrapper[4736]: I0214 11:02:00.436878 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d96c5d8-mfqqp" podUID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerName="horizon" containerID="cri-o://04fd8fab3519745e093dbed42df83c22c60787a9527db958728640db4965d92b" gracePeriod=30 Feb 14 11:02:01 crc kubenswrapper[4736]: I0214 11:02:01.026877 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5758749df4-tzq2d" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Feb 14 11:02:02 crc kubenswrapper[4736]: I0214 11:02:02.388057 4736 generic.go:334] "Generic (PLEG): container finished" podID="2c911912-7053-4fc2-a31e-20bcce081834" containerID="07e1681a68e5e0fff31272879d764409fed468fa7ef7cde47aa3cf5cadb8c5d7" exitCode=0 Feb 14 11:02:02 crc kubenswrapper[4736]: I0214 11:02:02.388103 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77f6fd57bc-nlqb5" event={"ID":"2c911912-7053-4fc2-a31e-20bcce081834","Type":"ContainerDied","Data":"07e1681a68e5e0fff31272879d764409fed468fa7ef7cde47aa3cf5cadb8c5d7"} Feb 14 11:02:02 crc kubenswrapper[4736]: I0214 11:02:02.510056 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" podUID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: connect: connection refused" Feb 14 11:02:03 crc kubenswrapper[4736]: I0214 11:02:03.379330 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 11:02:03 crc kubenswrapper[4736]: I0214 11:02:03.453023 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 11:02:03 crc kubenswrapper[4736]: I0214 11:02:03.453991 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerName="cinder-scheduler" containerID="cri-o://7cc377b8d9123e223ac085493dd585a2ba8187734ac224f8bb152744bb265e25" gracePeriod=30 Feb 14 11:02:03 crc kubenswrapper[4736]: I0214 11:02:03.454189 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerName="probe" containerID="cri-o://1727d8fb96c718f44908d2de2ada62508abb3abbd7310c9038f4f6fda08fd773" gracePeriod=30 Feb 14 11:02:03 crc kubenswrapper[4736]: I0214 11:02:03.890771 4736 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:02:04 crc kubenswrapper[4736]: I0214 11:02:04.423356 4736 generic.go:334] "Generic (PLEG): container finished" podID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerID="1727d8fb96c718f44908d2de2ada62508abb3abbd7310c9038f4f6fda08fd773" exitCode=0 Feb 14 11:02:04 crc kubenswrapper[4736]: I0214 11:02:04.423441 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1366cbc-87c7-41ec-9baf-647cfdb2add9","Type":"ContainerDied","Data":"1727d8fb96c718f44908d2de2ada62508abb3abbd7310c9038f4f6fda08fd773"} Feb 14 11:02:04 crc kubenswrapper[4736]: I0214 11:02:04.957565 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:02:05 crc kubenswrapper[4736]: I0214 11:02:05.454248 4736 generic.go:334] "Generic (PLEG): container finished" podID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerID="7cc377b8d9123e223ac085493dd585a2ba8187734ac224f8bb152744bb265e25" exitCode=0 Feb 14 11:02:05 crc kubenswrapper[4736]: I0214 11:02:05.454295 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1366cbc-87c7-41ec-9baf-647cfdb2add9","Type":"ContainerDied","Data":"7cc377b8d9123e223ac085493dd585a2ba8187734ac224f8bb152744bb265e25"} Feb 14 11:02:06 crc kubenswrapper[4736]: I0214 11:02:06.440370 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79d9bb575d-6pwpg" Feb 14 11:02:06 crc kubenswrapper[4736]: I0214 11:02:06.533485 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5758749df4-tzq2d"] Feb 14 11:02:06 crc kubenswrapper[4736]: I0214 11:02:06.533719 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5758749df4-tzq2d" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" 
containerName="barbican-api-log" containerID="cri-o://f15edd923a2b1dac96621e7a1ff5396326986364cdad085becaffe3c6cd1d1cd" gracePeriod=30 Feb 14 11:02:06 crc kubenswrapper[4736]: I0214 11:02:06.533875 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5758749df4-tzq2d" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerName="barbican-api" containerID="cri-o://448b6ccadf615f147430eb11f16cfc37c9dfe4820ec6984b16fb18c84b4b7ad2" gracePeriod=30 Feb 14 11:02:07 crc kubenswrapper[4736]: I0214 11:02:07.355702 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7677d9df65-nl5rx" Feb 14 11:02:07 crc kubenswrapper[4736]: I0214 11:02:07.516210 4736 generic.go:334] "Generic (PLEG): container finished" podID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerID="f15edd923a2b1dac96621e7a1ff5396326986364cdad085becaffe3c6cd1d1cd" exitCode=143 Feb 14 11:02:07 crc kubenswrapper[4736]: I0214 11:02:07.516253 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5758749df4-tzq2d" event={"ID":"02f71999-ab64-4b39-a9d6-48cb41d6b9b1","Type":"ContainerDied","Data":"f15edd923a2b1dac96621e7a1ff5396326986364cdad085becaffe3c6cd1d1cd"} Feb 14 11:02:09 crc kubenswrapper[4736]: E0214 11:02:09.210542 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 14 11:02:09 crc kubenswrapper[4736]: E0214 11:02:09.211229 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws2pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b93585de-a12c-446d-a045-16d74eb6d7db): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 11:02:09 crc kubenswrapper[4736]: E0214 11:02:09.213621 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.392451 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.397198 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.440034 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511381 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-swift-storage-0\") pod \"fba91db3-6b2e-40fa-87dd-9211f5976bec\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511638 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-public-tls-certs\") pod \"2c911912-7053-4fc2-a31e-20bcce081834\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511670 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-sb\") pod \"fba91db3-6b2e-40fa-87dd-9211f5976bec\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511701 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-ovndb-tls-certs\") pod \"2c911912-7053-4fc2-a31e-20bcce081834\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511733 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-httpd-config\") pod \"2c911912-7053-4fc2-a31e-20bcce081834\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511789 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lhqr9\" (UniqueName: \"kubernetes.io/projected/d1366cbc-87c7-41ec-9baf-647cfdb2add9-kube-api-access-lhqr9\") pod \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511823 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cgvp\" (UniqueName: \"kubernetes.io/projected/2c911912-7053-4fc2-a31e-20bcce081834-kube-api-access-2cgvp\") pod \"2c911912-7053-4fc2-a31e-20bcce081834\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511844 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7wbq\" (UniqueName: \"kubernetes.io/projected/fba91db3-6b2e-40fa-87dd-9211f5976bec-kube-api-access-f7wbq\") pod \"fba91db3-6b2e-40fa-87dd-9211f5976bec\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511878 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-scripts\") pod \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511891 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1366cbc-87c7-41ec-9baf-647cfdb2add9-etc-machine-id\") pod \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511905 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-config\") pod \"2c911912-7053-4fc2-a31e-20bcce081834\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " Feb 14 11:02:09 
crc kubenswrapper[4736]: I0214 11:02:09.511935 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-nb\") pod \"fba91db3-6b2e-40fa-87dd-9211f5976bec\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511956 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-svc\") pod \"fba91db3-6b2e-40fa-87dd-9211f5976bec\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.511991 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data-custom\") pod \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.512007 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-combined-ca-bundle\") pod \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.512028 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data\") pod \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\" (UID: \"d1366cbc-87c7-41ec-9baf-647cfdb2add9\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.512114 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-internal-tls-certs\") pod \"2c911912-7053-4fc2-a31e-20bcce081834\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.512133 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-config\") pod \"fba91db3-6b2e-40fa-87dd-9211f5976bec\" (UID: \"fba91db3-6b2e-40fa-87dd-9211f5976bec\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.512156 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-combined-ca-bundle\") pod \"2c911912-7053-4fc2-a31e-20bcce081834\" (UID: \"2c911912-7053-4fc2-a31e-20bcce081834\") " Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.514331 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1366cbc-87c7-41ec-9baf-647cfdb2add9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d1366cbc-87c7-41ec-9baf-647cfdb2add9" (UID: "d1366cbc-87c7-41ec-9baf-647cfdb2add9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.533818 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-scripts" (OuterVolumeSpecName: "scripts") pod "d1366cbc-87c7-41ec-9baf-647cfdb2add9" (UID: "d1366cbc-87c7-41ec-9baf-647cfdb2add9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.540907 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c911912-7053-4fc2-a31e-20bcce081834-kube-api-access-2cgvp" (OuterVolumeSpecName: "kube-api-access-2cgvp") pod "2c911912-7053-4fc2-a31e-20bcce081834" (UID: "2c911912-7053-4fc2-a31e-20bcce081834"). InnerVolumeSpecName "kube-api-access-2cgvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.547159 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba91db3-6b2e-40fa-87dd-9211f5976bec-kube-api-access-f7wbq" (OuterVolumeSpecName: "kube-api-access-f7wbq") pod "fba91db3-6b2e-40fa-87dd-9211f5976bec" (UID: "fba91db3-6b2e-40fa-87dd-9211f5976bec"). InnerVolumeSpecName "kube-api-access-f7wbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.547752 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2c911912-7053-4fc2-a31e-20bcce081834" (UID: "2c911912-7053-4fc2-a31e-20bcce081834"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.566897 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d1366cbc-87c7-41ec-9baf-647cfdb2add9" (UID: "d1366cbc-87c7-41ec-9baf-647cfdb2add9"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.568600 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77f6fd57bc-nlqb5" event={"ID":"2c911912-7053-4fc2-a31e-20bcce081834","Type":"ContainerDied","Data":"290dcd17cab0995c23cf6a6b09cdf87096156d00c073da69d9ce2d36114c41d3"} Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.568785 4736 scope.go:117] "RemoveContainer" containerID="22c5d0275efb993b2fcf4238f8b3c5bc3be2100cd0a6bb245f216c7f2dc32105" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.569091 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77f6fd57bc-nlqb5" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.572253 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1366cbc-87c7-41ec-9baf-647cfdb2add9-kube-api-access-lhqr9" (OuterVolumeSpecName: "kube-api-access-lhqr9") pod "d1366cbc-87c7-41ec-9baf-647cfdb2add9" (UID: "d1366cbc-87c7-41ec-9baf-647cfdb2add9"). InnerVolumeSpecName "kube-api-access-lhqr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.581855 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" event={"ID":"fba91db3-6b2e-40fa-87dd-9211f5976bec","Type":"ContainerDied","Data":"982e16a5d17dade46bfe78ff6ee5618251eacfafe0291b41518e206dc0ae1596"} Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.581931 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.617035 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="ceilometer-central-agent" containerID="cri-o://4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1" gracePeriod=30 Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.617334 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.617777 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1366cbc-87c7-41ec-9baf-647cfdb2add9","Type":"ContainerDied","Data":"fa8fc7056a9887d73ac68b2abc51dcd4f32f90a828e0cf17e778ab3d7a4842e0"} Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.618060 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="sg-core" containerID="cri-o://e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d" gracePeriod=30 Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.618116 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="ceilometer-notification-agent" containerID="cri-o://a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a" gracePeriod=30 Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.625431 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.625451 4736 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lhqr9\" (UniqueName: \"kubernetes.io/projected/d1366cbc-87c7-41ec-9baf-647cfdb2add9-kube-api-access-lhqr9\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.625460 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cgvp\" (UniqueName: \"kubernetes.io/projected/2c911912-7053-4fc2-a31e-20bcce081834-kube-api-access-2cgvp\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.625468 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7wbq\" (UniqueName: \"kubernetes.io/projected/fba91db3-6b2e-40fa-87dd-9211f5976bec-kube-api-access-f7wbq\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.625476 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.625487 4736 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1366cbc-87c7-41ec-9baf-647cfdb2add9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.625496 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.792590 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.849924 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1366cbc-87c7-41ec-9baf-647cfdb2add9" (UID: 
"d1366cbc-87c7-41ec-9baf-647cfdb2add9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.856350 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c911912-7053-4fc2-a31e-20bcce081834" (UID: "2c911912-7053-4fc2-a31e-20bcce081834"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.872716 4736 scope.go:117] "RemoveContainer" containerID="07e1681a68e5e0fff31272879d764409fed468fa7ef7cde47aa3cf5cadb8c5d7" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.887546 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-config" (OuterVolumeSpecName: "config") pod "2c911912-7053-4fc2-a31e-20bcce081834" (UID: "2c911912-7053-4fc2-a31e-20bcce081834"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.917510 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2c911912-7053-4fc2-a31e-20bcce081834" (UID: "2c911912-7053-4fc2-a31e-20bcce081834"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.918290 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fba91db3-6b2e-40fa-87dd-9211f5976bec" (UID: "fba91db3-6b2e-40fa-87dd-9211f5976bec"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.918424 4736 scope.go:117] "RemoveContainer" containerID="adb59dd9fd022260cdba7dc19c6e034ca0a99277c8227c9297f67a6c41fdb2d5" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.933966 4736 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.933994 4736 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.934005 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.934013 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.934025 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.943775 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fba91db3-6b2e-40fa-87dd-9211f5976bec" (UID: "fba91db3-6b2e-40fa-87dd-9211f5976bec"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.953888 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c911912-7053-4fc2-a31e-20bcce081834" (UID: "2c911912-7053-4fc2-a31e-20bcce081834"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.962386 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fba91db3-6b2e-40fa-87dd-9211f5976bec" (UID: "fba91db3-6b2e-40fa-87dd-9211f5976bec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.975158 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-config" (OuterVolumeSpecName: "config") pod "fba91db3-6b2e-40fa-87dd-9211f5976bec" (UID: "fba91db3-6b2e-40fa-87dd-9211f5976bec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.995110 4736 scope.go:117] "RemoveContainer" containerID="a9bfc0f8ca6f3ebc6202eefa756c843c666f415ef435d726bc5ba29d43affa18" Feb 14 11:02:09 crc kubenswrapper[4736]: I0214 11:02:09.997635 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fba91db3-6b2e-40fa-87dd-9211f5976bec" (UID: "fba91db3-6b2e-40fa-87dd-9211f5976bec"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.018264 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2c911912-7053-4fc2-a31e-20bcce081834" (UID: "2c911912-7053-4fc2-a31e-20bcce081834"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.026955 4736 scope.go:117] "RemoveContainer" containerID="1727d8fb96c718f44908d2de2ada62508abb3abbd7310c9038f4f6fda08fd773" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.035842 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.035871 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.035881 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.035889 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.035899 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fba91db3-6b2e-40fa-87dd-9211f5976bec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" 
Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.035908 4736 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c911912-7053-4fc2-a31e-20bcce081834-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.055945 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data" (OuterVolumeSpecName: "config-data") pod "d1366cbc-87c7-41ec-9baf-647cfdb2add9" (UID: "d1366cbc-87c7-41ec-9baf-647cfdb2add9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.067453 4736 scope.go:117] "RemoveContainer" containerID="7cc377b8d9123e223ac085493dd585a2ba8187734ac224f8bb152744bb265e25" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.139062 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1366cbc-87c7-41ec-9baf-647cfdb2add9-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.413599 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77f6fd57bc-nlqb5"] Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.444409 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-77f6fd57bc-nlqb5"] Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.486150 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7tmft"] Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.549445 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7tmft"] Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.570015 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.579809 
4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591159 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 11:02:10 crc kubenswrapper[4736]: E0214 11:02:10.591593 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c911912-7053-4fc2-a31e-20bcce081834" containerName="neutron-httpd" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591606 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c911912-7053-4fc2-a31e-20bcce081834" containerName="neutron-httpd" Feb 14 11:02:10 crc kubenswrapper[4736]: E0214 11:02:10.591635 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerName="cinder-scheduler" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591641 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerName="cinder-scheduler" Feb 14 11:02:10 crc kubenswrapper[4736]: E0214 11:02:10.591657 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerName="init" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591662 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerName="init" Feb 14 11:02:10 crc kubenswrapper[4736]: E0214 11:02:10.591671 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerName="dnsmasq-dns" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591677 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerName="dnsmasq-dns" Feb 14 11:02:10 crc kubenswrapper[4736]: E0214 11:02:10.591690 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerName="probe" Feb 14 
11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591695 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerName="probe" Feb 14 11:02:10 crc kubenswrapper[4736]: E0214 11:02:10.591706 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c911912-7053-4fc2-a31e-20bcce081834" containerName="neutron-api" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591711 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c911912-7053-4fc2-a31e-20bcce081834" containerName="neutron-api" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591924 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerName="cinder-scheduler" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591932 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c911912-7053-4fc2-a31e-20bcce081834" containerName="neutron-api" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591939 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c911912-7053-4fc2-a31e-20bcce081834" containerName="neutron-httpd" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591953 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" containerName="probe" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.591976 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerName="dnsmasq-dns" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.592918 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.602703 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.602954 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.624039 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcn6g\" (UniqueName: \"kubernetes.io/projected/13d207cc-8160-449b-8049-04047efb4b20-kube-api-access-wcn6g\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.626065 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13d207cc-8160-449b-8049-04047efb4b20-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.626244 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.626358 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc 
kubenswrapper[4736]: I0214 11:02:10.626545 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-scripts\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.626724 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-config-data\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.728485 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.728787 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-scripts\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.728828 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-config-data\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.728913 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcn6g\" (UniqueName: 
\"kubernetes.io/projected/13d207cc-8160-449b-8049-04047efb4b20-kube-api-access-wcn6g\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.728935 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13d207cc-8160-449b-8049-04047efb4b20-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.728958 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.739932 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13d207cc-8160-449b-8049-04047efb4b20-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.757698 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.758091 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-scripts\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " 
pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.760678 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-config-data\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.776809 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13d207cc-8160-449b-8049-04047efb4b20-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.779372 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcn6g\" (UniqueName: \"kubernetes.io/projected/13d207cc-8160-449b-8049-04047efb4b20-kube-api-access-wcn6g\") pod \"cinder-scheduler-0\" (UID: \"13d207cc-8160-449b-8049-04047efb4b20\") " pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.824169 4736 generic.go:334] "Generic (PLEG): container finished" podID="b93585de-a12c-446d-a045-16d74eb6d7db" containerID="e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d" exitCode=2 Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.824197 4736 generic.go:334] "Generic (PLEG): container finished" podID="b93585de-a12c-446d-a045-16d74eb6d7db" containerID="4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1" exitCode=0 Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.824289 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93585de-a12c-446d-a045-16d74eb6d7db","Type":"ContainerDied","Data":"e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d"} Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.824327 
4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93585de-a12c-446d-a045-16d74eb6d7db","Type":"ContainerDied","Data":"4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1"} Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.828051 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.849158 4736 generic.go:334] "Generic (PLEG): container finished" podID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerID="448b6ccadf615f147430eb11f16cfc37c9dfe4820ec6984b16fb18c84b4b7ad2" exitCode=0 Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.849213 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5758749df4-tzq2d" event={"ID":"02f71999-ab64-4b39-a9d6-48cb41d6b9b1","Type":"ContainerDied","Data":"448b6ccadf615f147430eb11f16cfc37c9dfe4820ec6984b16fb18c84b4b7ad2"} Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.849240 4736 scope.go:117] "RemoveContainer" containerID="448b6ccadf615f147430eb11f16cfc37c9dfe4820ec6984b16fb18c84b4b7ad2" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.871995 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32441c5d-4041-4687-b31b-fb121c4d01a7","Type":"ContainerStarted","Data":"9d3266d862050ed3a05f5ef24552faec3fb132273e063a3e1c65eea9dff5dd07"} Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.889050 4736 scope.go:117] "RemoveContainer" containerID="f15edd923a2b1dac96621e7a1ff5396326986364cdad085becaffe3c6cd1d1cd" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.938241 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th8cg\" (UniqueName: \"kubernetes.io/projected/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-kube-api-access-th8cg\") pod \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\" (UID: 
\"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.938306 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-combined-ca-bundle\") pod \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.938346 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data-custom\") pod \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.938478 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data\") pod \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.938521 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-logs\") pod \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\" (UID: \"02f71999-ab64-4b39-a9d6-48cb41d6b9b1\") " Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.939512 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-logs" (OuterVolumeSpecName: "logs") pod "02f71999-ab64-4b39-a9d6-48cb41d6b9b1" (UID: "02f71999-ab64-4b39-a9d6-48cb41d6b9b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.950604 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.950946 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "02f71999-ab64-4b39-a9d6-48cb41d6b9b1" (UID: "02f71999-ab64-4b39-a9d6-48cb41d6b9b1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:10 crc kubenswrapper[4736]: I0214 11:02:10.958969 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-kube-api-access-th8cg" (OuterVolumeSpecName: "kube-api-access-th8cg") pod "02f71999-ab64-4b39-a9d6-48cb41d6b9b1" (UID: "02f71999-ab64-4b39-a9d6-48cb41d6b9b1"). InnerVolumeSpecName "kube-api-access-th8cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.026946 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02f71999-ab64-4b39-a9d6-48cb41d6b9b1" (UID: "02f71999-ab64-4b39-a9d6-48cb41d6b9b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.040955 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data" (OuterVolumeSpecName: "config-data") pod "02f71999-ab64-4b39-a9d6-48cb41d6b9b1" (UID: "02f71999-ab64-4b39-a9d6-48cb41d6b9b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.046278 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th8cg\" (UniqueName: \"kubernetes.io/projected/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-kube-api-access-th8cg\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.046310 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.046319 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.046328 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.046338 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f71999-ab64-4b39-a9d6-48cb41d6b9b1-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.333866 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 14 11:02:11 crc kubenswrapper[4736]: E0214 11:02:11.334697 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerName="barbican-api-log" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.334788 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerName="barbican-api-log" Feb 14 11:02:11 crc kubenswrapper[4736]: E0214 11:02:11.334869 
4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerName="barbican-api" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.334919 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerName="barbican-api" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.335142 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerName="barbican-api-log" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.335235 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" containerName="barbican-api" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.335833 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.341434 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.341632 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mvh8z" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.341925 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.349530 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.455409 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-openstack-config-secret\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc 
kubenswrapper[4736]: I0214 11:02:11.457011 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.457059 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv54m\" (UniqueName: \"kubernetes.io/projected/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-kube-api-access-mv54m\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.457129 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-openstack-config\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.558807 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-openstack-config-secret\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.558886 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.558905 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mv54m\" (UniqueName: \"kubernetes.io/projected/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-kube-api-access-mv54m\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.558945 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-openstack-config\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.563023 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-openstack-config\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.566737 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-openstack-config-secret\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.567104 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.582429 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv54m\" (UniqueName: 
\"kubernetes.io/projected/ec5ce106-52f4-4985-a2b9-99266fe3d2d9-kube-api-access-mv54m\") pod \"openstackclient\" (UID: \"ec5ce106-52f4-4985-a2b9-99266fe3d2d9\") " pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.632686 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 11:02:11 crc kubenswrapper[4736]: W0214 11:02:11.639150 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d207cc_8160_449b_8049_04047efb4b20.slice/crio-2cd470e9ef77d6ef057887591c69a868d09f6671f6af6c77f3e9fb3ed26095ed WatchSource:0}: Error finding container 2cd470e9ef77d6ef057887591c69a868d09f6671f6af6c77f3e9fb3ed26095ed: Status 404 returned error can't find the container with id 2cd470e9ef77d6ef057887591c69a868d09f6671f6af6c77f3e9fb3ed26095ed Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.740863 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.906905 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"13d207cc-8160-449b-8049-04047efb4b20","Type":"ContainerStarted","Data":"2cd470e9ef77d6ef057887591c69a868d09f6671f6af6c77f3e9fb3ed26095ed"} Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.936841 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5758749df4-tzq2d" event={"ID":"02f71999-ab64-4b39-a9d6-48cb41d6b9b1","Type":"ContainerDied","Data":"1cfa2c678bb87e386f6b2842feef808a69d0eb91e5b2e24664c90e05c54cd448"} Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.936931 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5758749df4-tzq2d" Feb 14 11:02:11 crc kubenswrapper[4736]: I0214 11:02:11.999242 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32441c5d-4041-4687-b31b-fb121c4d01a7","Type":"ContainerStarted","Data":"03e7ce5bbdab6e068975d7da3ab42be44532d8f682bd26956ef2fdc728c2bb57"} Feb 14 11:02:12 crc kubenswrapper[4736]: I0214 11:02:12.013160 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5758749df4-tzq2d"] Feb 14 11:02:12 crc kubenswrapper[4736]: I0214 11:02:12.028271 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5758749df4-tzq2d"] Feb 14 11:02:12 crc kubenswrapper[4736]: I0214 11:02:12.293693 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 14 11:02:12 crc kubenswrapper[4736]: W0214 11:02:12.311823 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec5ce106_52f4_4985_a2b9_99266fe3d2d9.slice/crio-5415fbba6e5b4a329c9319eda08bd3a350c127d3af08285050148f55c67c90f9 WatchSource:0}: Error finding container 5415fbba6e5b4a329c9319eda08bd3a350c127d3af08285050148f55c67c90f9: Status 404 returned error can't find the container with id 5415fbba6e5b4a329c9319eda08bd3a350c127d3af08285050148f55c67c90f9 Feb 14 11:02:12 crc kubenswrapper[4736]: I0214 11:02:12.407871 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f71999-ab64-4b39-a9d6-48cb41d6b9b1" path="/var/lib/kubelet/pods/02f71999-ab64-4b39-a9d6-48cb41d6b9b1/volumes" Feb 14 11:02:12 crc kubenswrapper[4736]: I0214 11:02:12.408571 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c911912-7053-4fc2-a31e-20bcce081834" path="/var/lib/kubelet/pods/2c911912-7053-4fc2-a31e-20bcce081834/volumes" Feb 14 11:02:12 crc kubenswrapper[4736]: I0214 11:02:12.409117 4736 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="d1366cbc-87c7-41ec-9baf-647cfdb2add9" path="/var/lib/kubelet/pods/d1366cbc-87c7-41ec-9baf-647cfdb2add9/volumes" Feb 14 11:02:12 crc kubenswrapper[4736]: I0214 11:02:12.410272 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba91db3-6b2e-40fa-87dd-9211f5976bec" path="/var/lib/kubelet/pods/fba91db3-6b2e-40fa-87dd-9211f5976bec/volumes" Feb 14 11:02:12 crc kubenswrapper[4736]: I0214 11:02:12.511424 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-7tmft" podUID="fba91db3-6b2e-40fa-87dd-9211f5976bec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: i/o timeout" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.048408 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ec5ce106-52f4-4985-a2b9-99266fe3d2d9","Type":"ContainerStarted","Data":"5415fbba6e5b4a329c9319eda08bd3a350c127d3af08285050148f55c67c90f9"} Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.070983 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"13d207cc-8160-449b-8049-04047efb4b20","Type":"ContainerStarted","Data":"67c83a74aeab7eeb9eae47d6b0e4a47eb538b13560d979604681cfec2b6c8e5f"} Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.102604 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32441c5d-4041-4687-b31b-fb121c4d01a7","Type":"ContainerStarted","Data":"3e63df0a7a2ecbb31f4c0f70cf1c3b8b59885ac4de72c938fc510d454a1214ed"} Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.102852 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.128721 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=16.128706465 
podStartE2EDuration="16.128706465s" podCreationTimestamp="2026-02-14 11:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:13.126048507 +0000 UTC m=+1243.494675875" watchObservedRunningTime="2026-02-14 11:02:13.128706465 +0000 UTC m=+1243.497333833" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.593494 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.698272 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-combined-ca-bundle\") pod \"b93585de-a12c-446d-a045-16d74eb6d7db\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.698415 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-sg-core-conf-yaml\") pod \"b93585de-a12c-446d-a045-16d74eb6d7db\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.698453 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-run-httpd\") pod \"b93585de-a12c-446d-a045-16d74eb6d7db\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.698469 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-config-data\") pod \"b93585de-a12c-446d-a045-16d74eb6d7db\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.698503 
4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-log-httpd\") pod \"b93585de-a12c-446d-a045-16d74eb6d7db\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.698527 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-scripts\") pod \"b93585de-a12c-446d-a045-16d74eb6d7db\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.698552 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws2pk\" (UniqueName: \"kubernetes.io/projected/b93585de-a12c-446d-a045-16d74eb6d7db-kube-api-access-ws2pk\") pod \"b93585de-a12c-446d-a045-16d74eb6d7db\" (UID: \"b93585de-a12c-446d-a045-16d74eb6d7db\") " Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.699262 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b93585de-a12c-446d-a045-16d74eb6d7db" (UID: "b93585de-a12c-446d-a045-16d74eb6d7db"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.699409 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b93585de-a12c-446d-a045-16d74eb6d7db" (UID: "b93585de-a12c-446d-a045-16d74eb6d7db"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.707509 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93585de-a12c-446d-a045-16d74eb6d7db-kube-api-access-ws2pk" (OuterVolumeSpecName: "kube-api-access-ws2pk") pod "b93585de-a12c-446d-a045-16d74eb6d7db" (UID: "b93585de-a12c-446d-a045-16d74eb6d7db"). InnerVolumeSpecName "kube-api-access-ws2pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.713927 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-scripts" (OuterVolumeSpecName: "scripts") pod "b93585de-a12c-446d-a045-16d74eb6d7db" (UID: "b93585de-a12c-446d-a045-16d74eb6d7db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.742405 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b93585de-a12c-446d-a045-16d74eb6d7db" (UID: "b93585de-a12c-446d-a045-16d74eb6d7db"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.796261 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-config-data" (OuterVolumeSpecName: "config-data") pod "b93585de-a12c-446d-a045-16d74eb6d7db" (UID: "b93585de-a12c-446d-a045-16d74eb6d7db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.800446 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.800475 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.800484 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.800493 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93585de-a12c-446d-a045-16d74eb6d7db-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.800501 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.800509 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws2pk\" (UniqueName: \"kubernetes.io/projected/b93585de-a12c-446d-a045-16d74eb6d7db-kube-api-access-ws2pk\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.805033 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b93585de-a12c-446d-a045-16d74eb6d7db" (UID: "b93585de-a12c-446d-a045-16d74eb6d7db"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:13 crc kubenswrapper[4736]: I0214 11:02:13.902659 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93585de-a12c-446d-a045-16d74eb6d7db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.113574 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"13d207cc-8160-449b-8049-04047efb4b20","Type":"ContainerStarted","Data":"984f2e937f2c42d8e7e77e4836a13125a7ac412b64994b42f1a8f1acc0d0c644"} Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.119345 4736 generic.go:334] "Generic (PLEG): container finished" podID="b93585de-a12c-446d-a045-16d74eb6d7db" containerID="a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a" exitCode=0 Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.120040 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.124482 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93585de-a12c-446d-a045-16d74eb6d7db","Type":"ContainerDied","Data":"a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a"} Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.124545 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93585de-a12c-446d-a045-16d74eb6d7db","Type":"ContainerDied","Data":"0754bd8971c04f5265e8ff3627cd335a1da6aadb2fbdbd9352c47bbd044d7c31"} Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.124566 4736 scope.go:117] "RemoveContainer" containerID="e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.140925 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.140898448 podStartE2EDuration="4.140898448s" podCreationTimestamp="2026-02-14 11:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:14.131440862 +0000 UTC m=+1244.500068240" watchObservedRunningTime="2026-02-14 11:02:14.140898448 +0000 UTC m=+1244.509525826" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.160042 4736 scope.go:117] "RemoveContainer" containerID="a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.195263 4736 scope.go:117] "RemoveContainer" containerID="4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.246512 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.259190 4736 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.268964 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:14 crc kubenswrapper[4736]: E0214 11:02:14.269434 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="ceilometer-notification-agent" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.269451 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="ceilometer-notification-agent" Feb 14 11:02:14 crc kubenswrapper[4736]: E0214 11:02:14.269466 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="sg-core" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.269472 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="sg-core" Feb 14 11:02:14 crc kubenswrapper[4736]: E0214 11:02:14.269482 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="ceilometer-central-agent" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.269489 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="ceilometer-central-agent" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.269675 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="ceilometer-notification-agent" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.269688 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" containerName="sg-core" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.269705 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" 
containerName="ceilometer-central-agent" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.279121 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.279247 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.294014 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.294950 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.353621 4736 scope.go:117] "RemoveContainer" containerID="e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d" Feb 14 11:02:14 crc kubenswrapper[4736]: E0214 11:02:14.357020 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d\": container with ID starting with e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d not found: ID does not exist" containerID="e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.357061 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d"} err="failed to get container status \"e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d\": rpc error: code = NotFound desc = could not find container \"e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d\": container with ID starting with e1cafeeafb7682ce50b5138c37665d25057e0b70dec07a08d508a8ab9400d38d not found: ID does not exist" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.357101 4736 
scope.go:117] "RemoveContainer" containerID="a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a" Feb 14 11:02:14 crc kubenswrapper[4736]: E0214 11:02:14.358628 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a\": container with ID starting with a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a not found: ID does not exist" containerID="a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.358670 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a"} err="failed to get container status \"a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a\": rpc error: code = NotFound desc = could not find container \"a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a\": container with ID starting with a07b07d9cf4f297dceb2eec1486fff7b3dcea3ee18f8d296c823b74e4840d74a not found: ID does not exist" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.358686 4736 scope.go:117] "RemoveContainer" containerID="4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1" Feb 14 11:02:14 crc kubenswrapper[4736]: E0214 11:02:14.362887 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1\": container with ID starting with 4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1 not found: ID does not exist" containerID="4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.362920 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1"} err="failed to get container status \"4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1\": rpc error: code = NotFound desc = could not find container \"4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1\": container with ID starting with 4430a4d35f8c4b5c986d05ecd00f1ba5661b9f12b5f0311407f4bc919d012ac1 not found: ID does not exist" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.419807 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-run-httpd\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.419849 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss6xm\" (UniqueName: \"kubernetes.io/projected/2054df2b-1756-45d4-a3c7-6c2970b508fd-kube-api-access-ss6xm\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.419885 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-scripts\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.419907 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-config-data\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 
11:02:14.419939 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.419973 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.420000 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-log-httpd\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.433845 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93585de-a12c-446d-a045-16d74eb6d7db" path="/var/lib/kubelet/pods/b93585de-a12c-446d-a045-16d74eb6d7db/volumes" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.521654 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.521726 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.521783 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-log-httpd\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.521850 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-run-httpd\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.521880 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss6xm\" (UniqueName: \"kubernetes.io/projected/2054df2b-1756-45d4-a3c7-6c2970b508fd-kube-api-access-ss6xm\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.521921 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-scripts\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.521940 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-config-data\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.522925 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-log-httpd\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.523150 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-run-httpd\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.529064 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.531954 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.532641 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-config-data\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.534303 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-scripts\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.562618 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ss6xm\" (UniqueName: \"kubernetes.io/projected/2054df2b-1756-45d4-a3c7-6c2970b508fd-kube-api-access-ss6xm\") pod \"ceilometer-0\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " pod="openstack/ceilometer-0" Feb 14 11:02:14 crc kubenswrapper[4736]: I0214 11:02:14.632178 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:15 crc kubenswrapper[4736]: I0214 11:02:15.228933 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:15 crc kubenswrapper[4736]: I0214 11:02:15.951753 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 11:02:16 crc kubenswrapper[4736]: I0214 11:02:16.158051 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerStarted","Data":"fb360ac970aab6c14415294262fe99a19e310ea23c1fd7156439c9952375f288"} Feb 14 11:02:16 crc kubenswrapper[4736]: I0214 11:02:16.158334 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerStarted","Data":"046d058962dc4a0f2510e003b6d79fd93c46834ad8adcae1603f74664df9ee93"} Feb 14 11:02:17 crc kubenswrapper[4736]: I0214 11:02:17.060171 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:02:17 crc kubenswrapper[4736]: I0214 11:02:17.063982 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-557678d96b-tqmtc" Feb 14 11:02:17 crc kubenswrapper[4736]: I0214 11:02:17.176821 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerStarted","Data":"5afb51da94af3e50e58854dafd2a0ef44bee9bedf34ab0f2d9fa2961de2313a5"} Feb 14 11:02:17 crc kubenswrapper[4736]: I0214 11:02:17.696259 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:02:17 crc kubenswrapper[4736]: I0214 11:02:17.696732 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:02:17 crc kubenswrapper[4736]: I0214 11:02:17.696856 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:02:17 crc kubenswrapper[4736]: I0214 11:02:17.697515 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9999be9865e79e704addc20790845881e6f887c75a1494ff7df882251fb72d5a"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:02:17 crc kubenswrapper[4736]: I0214 11:02:17.697634 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://9999be9865e79e704addc20790845881e6f887c75a1494ff7df882251fb72d5a" gracePeriod=600 Feb 14 11:02:18 crc kubenswrapper[4736]: I0214 11:02:18.191601 
4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="9999be9865e79e704addc20790845881e6f887c75a1494ff7df882251fb72d5a" exitCode=0 Feb 14 11:02:18 crc kubenswrapper[4736]: I0214 11:02:18.191787 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"9999be9865e79e704addc20790845881e6f887c75a1494ff7df882251fb72d5a"} Feb 14 11:02:18 crc kubenswrapper[4736]: I0214 11:02:18.192896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"06df3833e98084abd044f093d850172879dab303a80e13d1c11f831527beea36"} Feb 14 11:02:18 crc kubenswrapper[4736]: I0214 11:02:18.192965 4736 scope.go:117] "RemoveContainer" containerID="0699b94691595822651ec4333c313c55f239b38c83c6b942a3933b33334d5715" Feb 14 11:02:18 crc kubenswrapper[4736]: I0214 11:02:18.196945 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerStarted","Data":"82c13025b6aa9e2ecf812de651b56f8069d8229acefe057517803c0fe6aa21ef"} Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.658582 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6c6f565b75-vzhbj"] Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.664463 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.671508 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.671695 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.671889 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.697003 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c6f565b75-vzhbj"] Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.731763 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdvz\" (UniqueName: \"kubernetes.io/projected/6c072889-cf21-4f12-a6eb-14fe8409b860-kube-api-access-sfdvz\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.731828 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-combined-ca-bundle\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.731868 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-config-data\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc 
kubenswrapper[4736]: I0214 11:02:19.731905 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-internal-tls-certs\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.731973 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c072889-cf21-4f12-a6eb-14fe8409b860-etc-swift\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.732030 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-public-tls-certs\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.732076 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c072889-cf21-4f12-a6eb-14fe8409b860-run-httpd\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.732124 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c072889-cf21-4f12-a6eb-14fe8409b860-log-httpd\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 
14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.833710 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c072889-cf21-4f12-a6eb-14fe8409b860-log-httpd\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.833782 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdvz\" (UniqueName: \"kubernetes.io/projected/6c072889-cf21-4f12-a6eb-14fe8409b860-kube-api-access-sfdvz\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.833809 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-combined-ca-bundle\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.833838 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-config-data\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.833867 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-internal-tls-certs\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 
11:02:19.833886 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c072889-cf21-4f12-a6eb-14fe8409b860-etc-swift\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.833945 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-public-tls-certs\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.833987 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c072889-cf21-4f12-a6eb-14fe8409b860-run-httpd\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.834201 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c072889-cf21-4f12-a6eb-14fe8409b860-log-httpd\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.834417 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c072889-cf21-4f12-a6eb-14fe8409b860-run-httpd\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.844320 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/6c072889-cf21-4f12-a6eb-14fe8409b860-etc-swift\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.859991 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-public-tls-certs\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.860599 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-config-data\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.860699 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-combined-ca-bundle\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.861027 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c072889-cf21-4f12-a6eb-14fe8409b860-internal-tls-certs\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.874369 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdvz\" (UniqueName: 
\"kubernetes.io/projected/6c072889-cf21-4f12-a6eb-14fe8409b860-kube-api-access-sfdvz\") pod \"swift-proxy-6c6f565b75-vzhbj\" (UID: \"6c072889-cf21-4f12-a6eb-14fe8409b860\") " pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:19 crc kubenswrapper[4736]: I0214 11:02:19.979844 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:20 crc kubenswrapper[4736]: I0214 11:02:20.229052 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerStarted","Data":"370939c01340063c0732664d1a8f1c1c092551aee1d863e1b529692ad4bd321f"} Feb 14 11:02:20 crc kubenswrapper[4736]: I0214 11:02:20.229477 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 11:02:20 crc kubenswrapper[4736]: I0214 11:02:20.266242 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.601337102 podStartE2EDuration="6.266222801s" podCreationTimestamp="2026-02-14 11:02:14 +0000 UTC" firstStartedPulling="2026-02-14 11:02:15.27430355 +0000 UTC m=+1245.642930918" lastFinishedPulling="2026-02-14 11:02:18.939189249 +0000 UTC m=+1249.307816617" observedRunningTime="2026-02-14 11:02:20.255899269 +0000 UTC m=+1250.624526637" watchObservedRunningTime="2026-02-14 11:02:20.266222801 +0000 UTC m=+1250.634850159" Feb 14 11:02:20 crc kubenswrapper[4736]: I0214 11:02:20.639963 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c6f565b75-vzhbj"] Feb 14 11:02:21 crc kubenswrapper[4736]: I0214 11:02:21.454152 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 11:02:21 crc kubenswrapper[4736]: I0214 11:02:21.744919 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 14 11:02:22 crc 
kubenswrapper[4736]: I0214 11:02:22.733007 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:22 crc kubenswrapper[4736]: I0214 11:02:22.733332 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="ceilometer-central-agent" containerID="cri-o://fb360ac970aab6c14415294262fe99a19e310ea23c1fd7156439c9952375f288" gracePeriod=30 Feb 14 11:02:22 crc kubenswrapper[4736]: I0214 11:02:22.733404 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="proxy-httpd" containerID="cri-o://370939c01340063c0732664d1a8f1c1c092551aee1d863e1b529692ad4bd321f" gracePeriod=30 Feb 14 11:02:22 crc kubenswrapper[4736]: I0214 11:02:22.733423 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="ceilometer-notification-agent" containerID="cri-o://5afb51da94af3e50e58854dafd2a0ef44bee9bedf34ab0f2d9fa2961de2313a5" gracePeriod=30 Feb 14 11:02:22 crc kubenswrapper[4736]: I0214 11:02:22.733582 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="sg-core" containerID="cri-o://82c13025b6aa9e2ecf812de651b56f8069d8229acefe057517803c0fe6aa21ef" gracePeriod=30 Feb 14 11:02:23 crc kubenswrapper[4736]: I0214 11:02:23.262611 4736 generic.go:334] "Generic (PLEG): container finished" podID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerID="370939c01340063c0732664d1a8f1c1c092551aee1d863e1b529692ad4bd321f" exitCode=0 Feb 14 11:02:23 crc kubenswrapper[4736]: I0214 11:02:23.262969 4736 generic.go:334] "Generic (PLEG): container finished" podID="2054df2b-1756-45d4-a3c7-6c2970b508fd" 
containerID="82c13025b6aa9e2ecf812de651b56f8069d8229acefe057517803c0fe6aa21ef" exitCode=2 Feb 14 11:02:23 crc kubenswrapper[4736]: I0214 11:02:23.262928 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerDied","Data":"370939c01340063c0732664d1a8f1c1c092551aee1d863e1b529692ad4bd321f"} Feb 14 11:02:23 crc kubenswrapper[4736]: I0214 11:02:23.263022 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerDied","Data":"82c13025b6aa9e2ecf812de651b56f8069d8229acefe057517803c0fe6aa21ef"} Feb 14 11:02:23 crc kubenswrapper[4736]: I0214 11:02:23.843114 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7979c77cb9-ql2gq" Feb 14 11:02:23 crc kubenswrapper[4736]: I0214 11:02:23.922055 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f99c476c6-hk87j"] Feb 14 11:02:23 crc kubenswrapper[4736]: I0214 11:02:23.922658 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f99c476c6-hk87j" podUID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerName="neutron-api" containerID="cri-o://3527ccb536661d475b9d882531c095b83134fbe1905f6c450532ec8e7d30574f" gracePeriod=30 Feb 14 11:02:23 crc kubenswrapper[4736]: I0214 11:02:23.923119 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f99c476c6-hk87j" podUID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerName="neutron-httpd" containerID="cri-o://69b14602eb8cc9ee1f5cb103ba79444bff1af9d6df7f63ef16e65394c8e79c69" gracePeriod=30 Feb 14 11:02:24 crc kubenswrapper[4736]: I0214 11:02:24.292247 4736 generic.go:334] "Generic (PLEG): container finished" podID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerID="69b14602eb8cc9ee1f5cb103ba79444bff1af9d6df7f63ef16e65394c8e79c69" exitCode=0 Feb 14 
11:02:24 crc kubenswrapper[4736]: I0214 11:02:24.292318 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f99c476c6-hk87j" event={"ID":"8abb8167-96bf-4ea8-8613-549c33aa15e6","Type":"ContainerDied","Data":"69b14602eb8cc9ee1f5cb103ba79444bff1af9d6df7f63ef16e65394c8e79c69"} Feb 14 11:02:24 crc kubenswrapper[4736]: I0214 11:02:24.321252 4736 generic.go:334] "Generic (PLEG): container finished" podID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerID="5afb51da94af3e50e58854dafd2a0ef44bee9bedf34ab0f2d9fa2961de2313a5" exitCode=0 Feb 14 11:02:24 crc kubenswrapper[4736]: I0214 11:02:24.321281 4736 generic.go:334] "Generic (PLEG): container finished" podID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerID="fb360ac970aab6c14415294262fe99a19e310ea23c1fd7156439c9952375f288" exitCode=0 Feb 14 11:02:24 crc kubenswrapper[4736]: I0214 11:02:24.321298 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerDied","Data":"5afb51da94af3e50e58854dafd2a0ef44bee9bedf34ab0f2d9fa2961de2313a5"} Feb 14 11:02:24 crc kubenswrapper[4736]: I0214 11:02:24.321321 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerDied","Data":"fb360ac970aab6c14415294262fe99a19e310ea23c1fd7156439c9952375f288"} Feb 14 11:02:27 crc kubenswrapper[4736]: I0214 11:02:27.349464 4736 generic.go:334] "Generic (PLEG): container finished" podID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerID="3527ccb536661d475b9d882531c095b83134fbe1905f6c450532ec8e7d30574f" exitCode=0 Feb 14 11:02:27 crc kubenswrapper[4736]: I0214 11:02:27.349538 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f99c476c6-hk87j" event={"ID":"8abb8167-96bf-4ea8-8613-549c33aa15e6","Type":"ContainerDied","Data":"3527ccb536661d475b9d882531c095b83134fbe1905f6c450532ec8e7d30574f"} Feb 14 11:02:29 crc 
kubenswrapper[4736]: I0214 11:02:29.404020 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c6f565b75-vzhbj" event={"ID":"6c072889-cf21-4f12-a6eb-14fe8409b860","Type":"ContainerStarted","Data":"d4f9c0688ab39df7c97fcdf7b7b450f3d1b2feb34548a991d9d218335a69bea3"} Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.511589 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.522340 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-scripts\") pod \"2054df2b-1756-45d4-a3c7-6c2970b508fd\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.522398 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss6xm\" (UniqueName: \"kubernetes.io/projected/2054df2b-1756-45d4-a3c7-6c2970b508fd-kube-api-access-ss6xm\") pod \"2054df2b-1756-45d4-a3c7-6c2970b508fd\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.522540 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-run-httpd\") pod \"2054df2b-1756-45d4-a3c7-6c2970b508fd\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.522596 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-log-httpd\") pod \"2054df2b-1756-45d4-a3c7-6c2970b508fd\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.522654 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-combined-ca-bundle\") pod \"2054df2b-1756-45d4-a3c7-6c2970b508fd\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.523682 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2054df2b-1756-45d4-a3c7-6c2970b508fd" (UID: "2054df2b-1756-45d4-a3c7-6c2970b508fd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.523781 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-sg-core-conf-yaml\") pod \"2054df2b-1756-45d4-a3c7-6c2970b508fd\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.524085 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2054df2b-1756-45d4-a3c7-6c2970b508fd" (UID: "2054df2b-1756-45d4-a3c7-6c2970b508fd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.524251 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-config-data\") pod \"2054df2b-1756-45d4-a3c7-6c2970b508fd\" (UID: \"2054df2b-1756-45d4-a3c7-6c2970b508fd\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.524712 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.524728 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2054df2b-1756-45d4-a3c7-6c2970b508fd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.580024 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-scripts" (OuterVolumeSpecName: "scripts") pod "2054df2b-1756-45d4-a3c7-6c2970b508fd" (UID: "2054df2b-1756-45d4-a3c7-6c2970b508fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.587136 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2054df2b-1756-45d4-a3c7-6c2970b508fd-kube-api-access-ss6xm" (OuterVolumeSpecName: "kube-api-access-ss6xm") pod "2054df2b-1756-45d4-a3c7-6c2970b508fd" (UID: "2054df2b-1756-45d4-a3c7-6c2970b508fd"). InnerVolumeSpecName "kube-api-access-ss6xm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.597048 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2054df2b-1756-45d4-a3c7-6c2970b508fd" (UID: "2054df2b-1756-45d4-a3c7-6c2970b508fd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.625913 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.625943 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.625953 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss6xm\" (UniqueName: \"kubernetes.io/projected/2054df2b-1756-45d4-a3c7-6c2970b508fd-kube-api-access-ss6xm\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.628065 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.677808 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2054df2b-1756-45d4-a3c7-6c2970b508fd" (UID: "2054df2b-1756-45d4-a3c7-6c2970b508fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.702372 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-config-data" (OuterVolumeSpecName: "config-data") pod "2054df2b-1756-45d4-a3c7-6c2970b508fd" (UID: "2054df2b-1756-45d4-a3c7-6c2970b508fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.726784 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-combined-ca-bundle\") pod \"8abb8167-96bf-4ea8-8613-549c33aa15e6\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.726977 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxs26\" (UniqueName: \"kubernetes.io/projected/8abb8167-96bf-4ea8-8613-549c33aa15e6-kube-api-access-nxs26\") pod \"8abb8167-96bf-4ea8-8613-549c33aa15e6\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.727066 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-ovndb-tls-certs\") pod \"8abb8167-96bf-4ea8-8613-549c33aa15e6\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.727256 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-config\") pod \"8abb8167-96bf-4ea8-8613-549c33aa15e6\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.727356 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-httpd-config\") pod \"8abb8167-96bf-4ea8-8613-549c33aa15e6\" (UID: \"8abb8167-96bf-4ea8-8613-549c33aa15e6\") " Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.727795 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.727865 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2054df2b-1756-45d4-a3c7-6c2970b508fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.736160 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8abb8167-96bf-4ea8-8613-549c33aa15e6-kube-api-access-nxs26" (OuterVolumeSpecName: "kube-api-access-nxs26") pod "8abb8167-96bf-4ea8-8613-549c33aa15e6" (UID: "8abb8167-96bf-4ea8-8613-549c33aa15e6"). InnerVolumeSpecName "kube-api-access-nxs26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.736164 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8abb8167-96bf-4ea8-8613-549c33aa15e6" (UID: "8abb8167-96bf-4ea8-8613-549c33aa15e6"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.770855 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-config" (OuterVolumeSpecName: "config") pod "8abb8167-96bf-4ea8-8613-549c33aa15e6" (UID: "8abb8167-96bf-4ea8-8613-549c33aa15e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.804452 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8abb8167-96bf-4ea8-8613-549c33aa15e6" (UID: "8abb8167-96bf-4ea8-8613-549c33aa15e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.814947 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8abb8167-96bf-4ea8-8613-549c33aa15e6" (UID: "8abb8167-96bf-4ea8-8613-549c33aa15e6"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.829497 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.829533 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.829543 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxs26\" (UniqueName: \"kubernetes.io/projected/8abb8167-96bf-4ea8-8613-549c33aa15e6-kube-api-access-nxs26\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.829552 4736 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:29 crc kubenswrapper[4736]: I0214 11:02:29.829562 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8abb8167-96bf-4ea8-8613-549c33aa15e6-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.441584 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ec5ce106-52f4-4985-a2b9-99266fe3d2d9","Type":"ContainerStarted","Data":"e172055a33d53a5ce863ffab2f7c0913a294ba4d6c34e937c601613be9a13b60"} Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.472164 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.633160748 podStartE2EDuration="19.472149635s" podCreationTimestamp="2026-02-14 11:02:11 +0000 UTC" 
firstStartedPulling="2026-02-14 11:02:12.315390696 +0000 UTC m=+1242.684018064" lastFinishedPulling="2026-02-14 11:02:29.154379583 +0000 UTC m=+1259.523006951" observedRunningTime="2026-02-14 11:02:30.465526092 +0000 UTC m=+1260.834153470" watchObservedRunningTime="2026-02-14 11:02:30.472149635 +0000 UTC m=+1260.840777003" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.472578 4736 generic.go:334] "Generic (PLEG): container finished" podID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerID="e9afa700f170b4aa20f9303e305f513dc88cc3df4f06793ac247cb0b4ca2f8ad" exitCode=137 Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.472632 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerDied","Data":"e9afa700f170b4aa20f9303e305f513dc88cc3df4f06793ac247cb0b4ca2f8ad"} Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.485126 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c6f565b75-vzhbj" event={"ID":"6c072889-cf21-4f12-a6eb-14fe8409b860","Type":"ContainerStarted","Data":"4c60da09a8fb32f5780e514935825c08c17ece7360ec070af455c5cd0452bd1d"} Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.485173 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c6f565b75-vzhbj" event={"ID":"6c072889-cf21-4f12-a6eb-14fe8409b860","Type":"ContainerStarted","Data":"a88fc16719d6ccc37e21e6922c844967e1c128c6bbc2e85ef4e92a28168727da"} Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.485623 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.486025 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.488153 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-7f99c476c6-hk87j" event={"ID":"8abb8167-96bf-4ea8-8613-549c33aa15e6","Type":"ContainerDied","Data":"f83d4eabb1e980d1a25040255631b59172e90cd09778d81288db7dd14ea698f1"} Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.488287 4736 scope.go:117] "RemoveContainer" containerID="69b14602eb8cc9ee1f5cb103ba79444bff1af9d6df7f63ef16e65394c8e79c69" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.488168 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f99c476c6-hk87j" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.513322 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2054df2b-1756-45d4-a3c7-6c2970b508fd","Type":"ContainerDied","Data":"046d058962dc4a0f2510e003b6d79fd93c46834ad8adcae1603f74664df9ee93"} Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.513418 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.515903 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6c6f565b75-vzhbj" podStartSLOduration=11.515886041 podStartE2EDuration="11.515886041s" podCreationTimestamp="2026-02-14 11:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:30.511155443 +0000 UTC m=+1260.879782831" watchObservedRunningTime="2026-02-14 11:02:30.515886041 +0000 UTC m=+1260.884513409" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.604287 4736 scope.go:117] "RemoveContainer" containerID="3527ccb536661d475b9d882531c095b83134fbe1905f6c450532ec8e7d30574f" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.678873 4736 scope.go:117] "RemoveContainer" containerID="370939c01340063c0732664d1a8f1c1c092551aee1d863e1b529692ad4bd321f" Feb 14 11:02:30 crc kubenswrapper[4736]: 
I0214 11:02:30.682195 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f99c476c6-hk87j"] Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.690185 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7f99c476c6-hk87j"] Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.706123 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.713588 4736 scope.go:117] "RemoveContainer" containerID="82c13025b6aa9e2ecf812de651b56f8069d8229acefe057517803c0fe6aa21ef" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.719457 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.729865 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:30 crc kubenswrapper[4736]: E0214 11:02:30.730272 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="ceilometer-central-agent" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730283 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="ceilometer-central-agent" Feb 14 11:02:30 crc kubenswrapper[4736]: E0214 11:02:30.730298 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="proxy-httpd" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730304 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="proxy-httpd" Feb 14 11:02:30 crc kubenswrapper[4736]: E0214 11:02:30.730319 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="ceilometer-notification-agent" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730325 4736 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="ceilometer-notification-agent" Feb 14 11:02:30 crc kubenswrapper[4736]: E0214 11:02:30.730351 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerName="neutron-httpd" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730356 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerName="neutron-httpd" Feb 14 11:02:30 crc kubenswrapper[4736]: E0214 11:02:30.730364 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerName="neutron-api" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730370 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerName="neutron-api" Feb 14 11:02:30 crc kubenswrapper[4736]: E0214 11:02:30.730381 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="sg-core" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730386 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="sg-core" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730547 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerName="neutron-httpd" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730560 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="ceilometer-central-agent" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730576 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abb8167-96bf-4ea8-8613-549c33aa15e6" containerName="neutron-api" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730588 4736 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="ceilometer-notification-agent" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730599 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="sg-core" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.730610 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" containerName="proxy-httpd" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.732806 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.735010 4736 scope.go:117] "RemoveContainer" containerID="5afb51da94af3e50e58854dafd2a0ef44bee9bedf34ab0f2d9fa2961de2313a5" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.735187 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.735364 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.739863 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.756087 4736 scope.go:117] "RemoveContainer" containerID="fb360ac970aab6c14415294262fe99a19e310ea23c1fd7156439c9952375f288" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.847397 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.847459 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.847493 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-config-data\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.847569 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-log-httpd\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.847775 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94r5g\" (UniqueName: \"kubernetes.io/projected/14d82352-7f04-48c2-aa10-a088c7541213-kube-api-access-94r5g\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.847815 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-scripts\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.848041 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-run-httpd\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.949968 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-run-httpd\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.950066 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.950107 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.950134 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-config-data\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.950150 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-log-httpd\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 
11:02:30.950192 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94r5g\" (UniqueName: \"kubernetes.io/projected/14d82352-7f04-48c2-aa10-a088c7541213-kube-api-access-94r5g\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.950211 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-scripts\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.950874 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-log-httpd\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.950983 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-run-httpd\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.958585 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.972165 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.973133 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-scripts\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.975242 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94r5g\" (UniqueName: \"kubernetes.io/projected/14d82352-7f04-48c2-aa10-a088c7541213-kube-api-access-94r5g\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:30 crc kubenswrapper[4736]: I0214 11:02:30.976664 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-config-data\") pod \"ceilometer-0\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " pod="openstack/ceilometer-0" Feb 14 11:02:31 crc kubenswrapper[4736]: I0214 11:02:31.056995 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:31 crc kubenswrapper[4736]: I0214 11:02:31.564716 4736 generic.go:334] "Generic (PLEG): container finished" podID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerID="04fd8fab3519745e093dbed42df83c22c60787a9527db958728640db4965d92b" exitCode=137 Feb 14 11:02:31 crc kubenswrapper[4736]: I0214 11:02:31.564834 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d96c5d8-mfqqp" event={"ID":"bd003c66-fc46-445a-a88a-23a7c17f9747","Type":"ContainerDied","Data":"04fd8fab3519745e093dbed42df83c22c60787a9527db958728640db4965d92b"} Feb 14 11:02:31 crc kubenswrapper[4736]: I0214 11:02:31.564862 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d96c5d8-mfqqp" event={"ID":"bd003c66-fc46-445a-a88a-23a7c17f9747","Type":"ContainerStarted","Data":"d75c8d7443e295d15b6b896b7f6edfc518815583b7203ab9204c009d97e150d1"} Feb 14 11:02:31 crc kubenswrapper[4736]: I0214 11:02:31.569901 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerStarted","Data":"addd3be5783720e5b80a35ec2a30cd08864d12153bd2d833826a22af62c8838b"} Feb 14 11:02:31 crc kubenswrapper[4736]: I0214 11:02:31.663445 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.427402 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2054df2b-1756-45d4-a3c7-6c2970b508fd" path="/var/lib/kubelet/pods/2054df2b-1756-45d4-a3c7-6c2970b508fd/volumes" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.430054 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8abb8167-96bf-4ea8-8613-549c33aa15e6" path="/var/lib/kubelet/pods/8abb8167-96bf-4ea8-8613-549c33aa15e6/volumes" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.469534 4736 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-api-db-create-drfv5"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.470564 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.494716 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-drfv5"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.510859 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-operator-scripts\") pod \"nova-api-db-create-drfv5\" (UID: \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\") " pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.510901 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzd6z\" (UniqueName: \"kubernetes.io/projected/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-kube-api-access-xzd6z\") pod \"nova-api-db-create-drfv5\" (UID: \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\") " pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.558970 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4dv5t"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.560201 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.569824 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ef43-account-create-update-q9clt"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.570892 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.573247 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.585517 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4dv5t"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.600568 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ef43-account-create-update-q9clt"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.611819 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd379fd3-2737-47ba-9f0f-59b46e24fed6-operator-scripts\") pod \"nova-api-ef43-account-create-update-q9clt\" (UID: \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\") " pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.612086 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-operator-scripts\") pod \"nova-api-db-create-drfv5\" (UID: \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\") " pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.612167 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-operator-scripts\") pod \"nova-cell0-db-create-4dv5t\" (UID: \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\") " pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.612247 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzd6z\" (UniqueName: 
\"kubernetes.io/projected/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-kube-api-access-xzd6z\") pod \"nova-api-db-create-drfv5\" (UID: \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\") " pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.612359 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5dgd\" (UniqueName: \"kubernetes.io/projected/dd379fd3-2737-47ba-9f0f-59b46e24fed6-kube-api-access-m5dgd\") pod \"nova-api-ef43-account-create-update-q9clt\" (UID: \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\") " pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.612426 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdfs\" (UniqueName: \"kubernetes.io/projected/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-kube-api-access-hbdfs\") pod \"nova-cell0-db-create-4dv5t\" (UID: \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\") " pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.613256 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-operator-scripts\") pod \"nova-api-db-create-drfv5\" (UID: \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\") " pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.617467 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerStarted","Data":"a36ea8bcb3d78671aac2d62c665634f01f171cc1a2a09cfa72e12bac72c7a6e5"} Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.650759 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzd6z\" (UniqueName: 
\"kubernetes.io/projected/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-kube-api-access-xzd6z\") pod \"nova-api-db-create-drfv5\" (UID: \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\") " pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.715709 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbdfs\" (UniqueName: \"kubernetes.io/projected/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-kube-api-access-hbdfs\") pod \"nova-cell0-db-create-4dv5t\" (UID: \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\") " pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.715762 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5dgd\" (UniqueName: \"kubernetes.io/projected/dd379fd3-2737-47ba-9f0f-59b46e24fed6-kube-api-access-m5dgd\") pod \"nova-api-ef43-account-create-update-q9clt\" (UID: \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\") " pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.715854 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd379fd3-2737-47ba-9f0f-59b46e24fed6-operator-scripts\") pod \"nova-api-ef43-account-create-update-q9clt\" (UID: \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\") " pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.715950 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-operator-scripts\") pod \"nova-cell0-db-create-4dv5t\" (UID: \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\") " pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.716852 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-operator-scripts\") pod \"nova-cell0-db-create-4dv5t\" (UID: \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\") " pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.717160 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd379fd3-2737-47ba-9f0f-59b46e24fed6-operator-scripts\") pod \"nova-api-ef43-account-create-update-q9clt\" (UID: \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\") " pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.746546 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbdfs\" (UniqueName: \"kubernetes.io/projected/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-kube-api-access-hbdfs\") pod \"nova-cell0-db-create-4dv5t\" (UID: \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\") " pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.749900 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kfzf9"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.751063 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.756770 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5dgd\" (UniqueName: \"kubernetes.io/projected/dd379fd3-2737-47ba-9f0f-59b46e24fed6-kube-api-access-m5dgd\") pod \"nova-api-ef43-account-create-update-q9clt\" (UID: \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\") " pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.771146 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3b5f-account-create-update-xq5sl"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.772799 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.774578 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.788653 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.788976 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kfzf9"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.804681 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3b5f-account-create-update-xq5sl"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.818560 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4cd324a-4e67-4132-ad44-0991435b9291-operator-scripts\") pod \"nova-cell1-db-create-kfzf9\" (UID: \"e4cd324a-4e67-4132-ad44-0991435b9291\") " pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.820283 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpj52\" (UniqueName: \"kubernetes.io/projected/e4cd324a-4e67-4132-ad44-0991435b9291-kube-api-access-qpj52\") pod \"nova-cell1-db-create-kfzf9\" (UID: \"e4cd324a-4e67-4132-ad44-0991435b9291\") " pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.820384 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67b5\" (UniqueName: \"kubernetes.io/projected/7910764c-65b4-4645-9d60-c25cbea434d5-kube-api-access-k67b5\") pod \"nova-cell0-3b5f-account-create-update-xq5sl\" (UID: \"7910764c-65b4-4645-9d60-c25cbea434d5\") " pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.820567 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7910764c-65b4-4645-9d60-c25cbea434d5-operator-scripts\") pod 
\"nova-cell0-3b5f-account-create-update-xq5sl\" (UID: \"7910764c-65b4-4645-9d60-c25cbea434d5\") " pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.882450 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.894867 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.938212 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7910764c-65b4-4645-9d60-c25cbea434d5-operator-scripts\") pod \"nova-cell0-3b5f-account-create-update-xq5sl\" (UID: \"7910764c-65b4-4645-9d60-c25cbea434d5\") " pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.938269 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4cd324a-4e67-4132-ad44-0991435b9291-operator-scripts\") pod \"nova-cell1-db-create-kfzf9\" (UID: \"e4cd324a-4e67-4132-ad44-0991435b9291\") " pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.938303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpj52\" (UniqueName: \"kubernetes.io/projected/e4cd324a-4e67-4132-ad44-0991435b9291-kube-api-access-qpj52\") pod \"nova-cell1-db-create-kfzf9\" (UID: \"e4cd324a-4e67-4132-ad44-0991435b9291\") " pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.938371 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k67b5\" (UniqueName: 
\"kubernetes.io/projected/7910764c-65b4-4645-9d60-c25cbea434d5-kube-api-access-k67b5\") pod \"nova-cell0-3b5f-account-create-update-xq5sl\" (UID: \"7910764c-65b4-4645-9d60-c25cbea434d5\") " pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.939228 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7910764c-65b4-4645-9d60-c25cbea434d5-operator-scripts\") pod \"nova-cell0-3b5f-account-create-update-xq5sl\" (UID: \"7910764c-65b4-4645-9d60-c25cbea434d5\") " pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.939577 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4cd324a-4e67-4132-ad44-0991435b9291-operator-scripts\") pod \"nova-cell1-db-create-kfzf9\" (UID: \"e4cd324a-4e67-4132-ad44-0991435b9291\") " pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.971562 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k67b5\" (UniqueName: \"kubernetes.io/projected/7910764c-65b4-4645-9d60-c25cbea434d5-kube-api-access-k67b5\") pod \"nova-cell0-3b5f-account-create-update-xq5sl\" (UID: \"7910764c-65b4-4645-9d60-c25cbea434d5\") " pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.973249 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpj52\" (UniqueName: \"kubernetes.io/projected/e4cd324a-4e67-4132-ad44-0991435b9291-kube-api-access-qpj52\") pod \"nova-cell1-db-create-kfzf9\" (UID: \"e4cd324a-4e67-4132-ad44-0991435b9291\") " pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.997866 4736 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-0fc8-account-create-update-drkqr"] Feb 14 11:02:32 crc kubenswrapper[4736]: I0214 11:02:32.999392 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.002697 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.029623 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0fc8-account-create-update-drkqr"] Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.104784 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.132808 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.143020 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1294507-4b94-4f46-91b3-7a3dffdd7494-operator-scripts\") pod \"nova-cell1-0fc8-account-create-update-drkqr\" (UID: \"a1294507-4b94-4f46-91b3-7a3dffdd7494\") " pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.143154 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdct9\" (UniqueName: \"kubernetes.io/projected/a1294507-4b94-4f46-91b3-7a3dffdd7494-kube-api-access-hdct9\") pod \"nova-cell1-0fc8-account-create-update-drkqr\" (UID: \"a1294507-4b94-4f46-91b3-7a3dffdd7494\") " pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.250826 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hdct9\" (UniqueName: \"kubernetes.io/projected/a1294507-4b94-4f46-91b3-7a3dffdd7494-kube-api-access-hdct9\") pod \"nova-cell1-0fc8-account-create-update-drkqr\" (UID: \"a1294507-4b94-4f46-91b3-7a3dffdd7494\") " pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.250921 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1294507-4b94-4f46-91b3-7a3dffdd7494-operator-scripts\") pod \"nova-cell1-0fc8-account-create-update-drkqr\" (UID: \"a1294507-4b94-4f46-91b3-7a3dffdd7494\") " pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.283738 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1294507-4b94-4f46-91b3-7a3dffdd7494-operator-scripts\") pod \"nova-cell1-0fc8-account-create-update-drkqr\" (UID: \"a1294507-4b94-4f46-91b3-7a3dffdd7494\") " pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.300397 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdct9\" (UniqueName: \"kubernetes.io/projected/a1294507-4b94-4f46-91b3-7a3dffdd7494-kube-api-access-hdct9\") pod \"nova-cell1-0fc8-account-create-update-drkqr\" (UID: \"a1294507-4b94-4f46-91b3-7a3dffdd7494\") " pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.322480 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.398783 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-drfv5"] Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.634178 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerStarted","Data":"4ca0a8b330420a48dd960b2467481af015af89e5e6928381e0f67bba2a5472fa"} Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.647528 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-drfv5" event={"ID":"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7","Type":"ContainerStarted","Data":"6c02981a4cc2e293057d590afe415cd00063a32a21414be80f9e6a99ab3a892d"} Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.729642 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4dv5t"] Feb 14 11:02:33 crc kubenswrapper[4736]: W0214 11:02:33.752609 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6391df09_ebfc_4e13_85b4_5aab4c8eefb0.slice/crio-5c8197bc4549adeeb6420c9d221e9dc0113601dfb8880b790c9121097e64a811 WatchSource:0}: Error finding container 5c8197bc4549adeeb6420c9d221e9dc0113601dfb8880b790c9121097e64a811: Status 404 returned error can't find the container with id 5c8197bc4549adeeb6420c9d221e9dc0113601dfb8880b790c9121097e64a811 Feb 14 11:02:33 crc kubenswrapper[4736]: I0214 11:02:33.874538 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ef43-account-create-update-q9clt"] Feb 14 11:02:33 crc kubenswrapper[4736]: W0214 11:02:33.887142 4736 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd379fd3_2737_47ba_9f0f_59b46e24fed6.slice/crio-b9c3bb68941eca9d5052a9177bfa19bee50a9f4ada269da6ad26e0079faa2d0e WatchSource:0}: Error finding container b9c3bb68941eca9d5052a9177bfa19bee50a9f4ada269da6ad26e0079faa2d0e: Status 404 returned error can't find the container with id b9c3bb68941eca9d5052a9177bfa19bee50a9f4ada269da6ad26e0079faa2d0e Feb 14 11:02:34 crc kubenswrapper[4736]: I0214 11:02:34.006721 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kfzf9"] Feb 14 11:02:34 crc kubenswrapper[4736]: I0214 11:02:34.280936 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3b5f-account-create-update-xq5sl"] Feb 14 11:02:34 crc kubenswrapper[4736]: I0214 11:02:34.655492 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ef43-account-create-update-q9clt" event={"ID":"dd379fd3-2737-47ba-9f0f-59b46e24fed6","Type":"ContainerStarted","Data":"b9c3bb68941eca9d5052a9177bfa19bee50a9f4ada269da6ad26e0079faa2d0e"} Feb 14 11:02:34 crc kubenswrapper[4736]: I0214 11:02:34.657007 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4dv5t" event={"ID":"6391df09-ebfc-4e13-85b4-5aab4c8eefb0","Type":"ContainerStarted","Data":"5c8197bc4549adeeb6420c9d221e9dc0113601dfb8880b790c9121097e64a811"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:34.994560 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0fc8-account-create-update-drkqr"] Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.019168 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.036520 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c6f565b75-vzhbj" Feb 14 11:02:35 crc kubenswrapper[4736]: W0214 
11:02:35.100483 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1294507_4b94_4f46_91b3_7a3dffdd7494.slice/crio-8c79761fa3a0becd9b35727116cee89d630add083c721634ac12729d497f796c WatchSource:0}: Error finding container 8c79761fa3a0becd9b35727116cee89d630add083c721634ac12729d497f796c: Status 404 returned error can't find the container with id 8c79761fa3a0becd9b35727116cee89d630add083c721634ac12729d497f796c Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.683443 4736 generic.go:334] "Generic (PLEG): container finished" podID="4a9d691f-bb0e-42be-b3f9-9cfd979d4de7" containerID="6d47a22858d8121410f4f25fa8c5b7e4f86cd7a59a421eb7e72f0a1a608bcb76" exitCode=0 Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.683822 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-drfv5" event={"ID":"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7","Type":"ContainerDied","Data":"6d47a22858d8121410f4f25fa8c5b7e4f86cd7a59a421eb7e72f0a1a608bcb76"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.689185 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kfzf9" event={"ID":"e4cd324a-4e67-4132-ad44-0991435b9291","Type":"ContainerStarted","Data":"37b4277df1c2d40318be63f7d4ca6bf3a77664767f75aaadb125e96c49d6d047"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.689214 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kfzf9" event={"ID":"e4cd324a-4e67-4132-ad44-0991435b9291","Type":"ContainerStarted","Data":"52808b164cfb48b9e5d17f143a8d843d78ef0605b272dcdc6ea3ac314c45ff62"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.691969 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" 
event={"ID":"a1294507-4b94-4f46-91b3-7a3dffdd7494","Type":"ContainerStarted","Data":"476068fc5c3ba6780fd2e84fd1e5a53d75ccf3ae1d538b96c0040cb86e005803"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.692001 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" event={"ID":"a1294507-4b94-4f46-91b3-7a3dffdd7494","Type":"ContainerStarted","Data":"8c79761fa3a0becd9b35727116cee89d630add083c721634ac12729d497f796c"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.701335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ef43-account-create-update-q9clt" event={"ID":"dd379fd3-2737-47ba-9f0f-59b46e24fed6","Type":"ContainerStarted","Data":"ecd623212845ee1bcd7da3c0bd24305a983bf9cd64e3156c8d9e396ccd596c17"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.707660 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" event={"ID":"7910764c-65b4-4645-9d60-c25cbea434d5","Type":"ContainerStarted","Data":"1fa65ef887f7ac125daafaa07a682f6f00f69593f65153a2c311b1a55630ad30"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.707710 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" event={"ID":"7910764c-65b4-4645-9d60-c25cbea434d5","Type":"ContainerStarted","Data":"e43e95735d4d3c1570528dd8606a707c9ba8a1491ca1e73328bbb9a1152fb593"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.726004 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" podStartSLOduration=3.725988431 podStartE2EDuration="3.725988431s" podCreationTimestamp="2026-02-14 11:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:35.724067035 +0000 UTC m=+1266.092694403" 
watchObservedRunningTime="2026-02-14 11:02:35.725988431 +0000 UTC m=+1266.094615799" Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.740246 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerStarted","Data":"d432e17144570b1c5cf3f57f24ed4af4ddca06b04fc3e4a36b008cc650f3be63"} Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.785354 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-kfzf9" podStartSLOduration=3.785336494 podStartE2EDuration="3.785336494s" podCreationTimestamp="2026-02-14 11:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:35.745089059 +0000 UTC m=+1266.113716427" watchObservedRunningTime="2026-02-14 11:02:35.785336494 +0000 UTC m=+1266.153963862" Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.800567 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ef43-account-create-update-q9clt" podStartSLOduration=3.800549338 podStartE2EDuration="3.800549338s" podCreationTimestamp="2026-02-14 11:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:35.772631383 +0000 UTC m=+1266.141258751" watchObservedRunningTime="2026-02-14 11:02:35.800549338 +0000 UTC m=+1266.169176706" Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.849176 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-4dv5t" podStartSLOduration=3.849155926 podStartE2EDuration="3.849155926s" podCreationTimestamp="2026-02-14 11:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:35.809992103 +0000 UTC 
m=+1266.178619471" watchObservedRunningTime="2026-02-14 11:02:35.849155926 +0000 UTC m=+1266.217783294" Feb 14 11:02:35 crc kubenswrapper[4736]: I0214 11:02:35.852388 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" podStartSLOduration=3.85237944 podStartE2EDuration="3.85237944s" podCreationTimestamp="2026-02-14 11:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:35.83557268 +0000 UTC m=+1266.204200058" watchObservedRunningTime="2026-02-14 11:02:35.85237944 +0000 UTC m=+1266.221006808" Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.164855 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.165095 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerName="glance-log" containerID="cri-o://286886288e8293a9f9f716288452cc220414a81c7aec0336bf085ca9099af496" gracePeriod=30 Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.165215 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerName="glance-httpd" containerID="cri-o://59fe292ef949872f6b06cf85ed7bb72db8b4e9c599d63fb6c70aa86f92eaa601" gracePeriod=30 Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.749724 4736 generic.go:334] "Generic (PLEG): container finished" podID="e4cd324a-4e67-4132-ad44-0991435b9291" containerID="37b4277df1c2d40318be63f7d4ca6bf3a77664767f75aaadb125e96c49d6d047" exitCode=0 Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.749867 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kfzf9" 
event={"ID":"e4cd324a-4e67-4132-ad44-0991435b9291","Type":"ContainerDied","Data":"37b4277df1c2d40318be63f7d4ca6bf3a77664767f75aaadb125e96c49d6d047"} Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.752169 4736 generic.go:334] "Generic (PLEG): container finished" podID="a1294507-4b94-4f46-91b3-7a3dffdd7494" containerID="476068fc5c3ba6780fd2e84fd1e5a53d75ccf3ae1d538b96c0040cb86e005803" exitCode=0 Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.752208 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" event={"ID":"a1294507-4b94-4f46-91b3-7a3dffdd7494","Type":"ContainerDied","Data":"476068fc5c3ba6780fd2e84fd1e5a53d75ccf3ae1d538b96c0040cb86e005803"} Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.753481 4736 generic.go:334] "Generic (PLEG): container finished" podID="dd379fd3-2737-47ba-9f0f-59b46e24fed6" containerID="ecd623212845ee1bcd7da3c0bd24305a983bf9cd64e3156c8d9e396ccd596c17" exitCode=0 Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.753547 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ef43-account-create-update-q9clt" event={"ID":"dd379fd3-2737-47ba-9f0f-59b46e24fed6","Type":"ContainerDied","Data":"ecd623212845ee1bcd7da3c0bd24305a983bf9cd64e3156c8d9e396ccd596c17"} Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.754542 4736 generic.go:334] "Generic (PLEG): container finished" podID="7910764c-65b4-4645-9d60-c25cbea434d5" containerID="1fa65ef887f7ac125daafaa07a682f6f00f69593f65153a2c311b1a55630ad30" exitCode=0 Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.754651 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" event={"ID":"7910764c-65b4-4645-9d60-c25cbea434d5","Type":"ContainerDied","Data":"1fa65ef887f7ac125daafaa07a682f6f00f69593f65153a2c311b1a55630ad30"} Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.755922 4736 generic.go:334] "Generic (PLEG): 
container finished" podID="6391df09-ebfc-4e13-85b4-5aab4c8eefb0" containerID="03c399c40c53903582932d8742b556ba9cccaddc6083390f35702fdf86bedd85" exitCode=0 Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.756034 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4dv5t" event={"ID":"6391df09-ebfc-4e13-85b4-5aab4c8eefb0","Type":"ContainerDied","Data":"03c399c40c53903582932d8742b556ba9cccaddc6083390f35702fdf86bedd85"} Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.757723 4736 generic.go:334] "Generic (PLEG): container finished" podID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerID="286886288e8293a9f9f716288452cc220414a81c7aec0336bf085ca9099af496" exitCode=143 Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.757823 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ae4ad0a-4038-4a87-943f-c3794df836c6","Type":"ContainerDied","Data":"286886288e8293a9f9f716288452cc220414a81c7aec0336bf085ca9099af496"} Feb 14 11:02:36 crc kubenswrapper[4736]: I0214 11:02:36.761851 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerStarted","Data":"93a2095aa8473768918276273a107366bc441bc774074ab732f674321e6db153"} Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.170175 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.304563 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzd6z\" (UniqueName: \"kubernetes.io/projected/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-kube-api-access-xzd6z\") pod \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\" (UID: \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\") " Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.305198 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-operator-scripts\") pod \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\" (UID: \"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7\") " Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.305976 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a9d691f-bb0e-42be-b3f9-9cfd979d4de7" (UID: "4a9d691f-bb0e-42be-b3f9-9cfd979d4de7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.329144 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-kube-api-access-xzd6z" (OuterVolumeSpecName: "kube-api-access-xzd6z") pod "4a9d691f-bb0e-42be-b3f9-9cfd979d4de7" (UID: "4a9d691f-bb0e-42be-b3f9-9cfd979d4de7"). InnerVolumeSpecName "kube-api-access-xzd6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.407632 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzd6z\" (UniqueName: \"kubernetes.io/projected/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-kube-api-access-xzd6z\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.407900 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.770639 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-drfv5" Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.771413 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-drfv5" event={"ID":"4a9d691f-bb0e-42be-b3f9-9cfd979d4de7","Type":"ContainerDied","Data":"6c02981a4cc2e293057d590afe415cd00063a32a21414be80f9e6a99ab3a892d"} Feb 14 11:02:37 crc kubenswrapper[4736]: I0214 11:02:37.771664 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c02981a4cc2e293057d590afe415cd00063a32a21414be80f9e6a99ab3a892d" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.144132 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.231046 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5dgd\" (UniqueName: \"kubernetes.io/projected/dd379fd3-2737-47ba-9f0f-59b46e24fed6-kube-api-access-m5dgd\") pod \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\" (UID: \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.231407 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd379fd3-2737-47ba-9f0f-59b46e24fed6-operator-scripts\") pod \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\" (UID: \"dd379fd3-2737-47ba-9f0f-59b46e24fed6\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.231985 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd379fd3-2737-47ba-9f0f-59b46e24fed6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd379fd3-2737-47ba-9f0f-59b46e24fed6" (UID: "dd379fd3-2737-47ba-9f0f-59b46e24fed6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.238012 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd379fd3-2737-47ba-9f0f-59b46e24fed6-kube-api-access-m5dgd" (OuterVolumeSpecName: "kube-api-access-m5dgd") pod "dd379fd3-2737-47ba-9f0f-59b46e24fed6" (UID: "dd379fd3-2737-47ba-9f0f-59b46e24fed6"). InnerVolumeSpecName "kube-api-access-m5dgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.333097 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5dgd\" (UniqueName: \"kubernetes.io/projected/dd379fd3-2737-47ba-9f0f-59b46e24fed6-kube-api-access-m5dgd\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.333137 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd379fd3-2737-47ba-9f0f-59b46e24fed6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.390512 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.434510 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4cd324a-4e67-4132-ad44-0991435b9291-operator-scripts\") pod \"e4cd324a-4e67-4132-ad44-0991435b9291\" (UID: \"e4cd324a-4e67-4132-ad44-0991435b9291\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.434551 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpj52\" (UniqueName: \"kubernetes.io/projected/e4cd324a-4e67-4132-ad44-0991435b9291-kube-api-access-qpj52\") pod \"e4cd324a-4e67-4132-ad44-0991435b9291\" (UID: \"e4cd324a-4e67-4132-ad44-0991435b9291\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.436180 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4cd324a-4e67-4132-ad44-0991435b9291-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4cd324a-4e67-4132-ad44-0991435b9291" (UID: "e4cd324a-4e67-4132-ad44-0991435b9291"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.440953 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4cd324a-4e67-4132-ad44-0991435b9291-kube-api-access-qpj52" (OuterVolumeSpecName: "kube-api-access-qpj52") pod "e4cd324a-4e67-4132-ad44-0991435b9291" (UID: "e4cd324a-4e67-4132-ad44-0991435b9291"). InnerVolumeSpecName "kube-api-access-qpj52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.536284 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4cd324a-4e67-4132-ad44-0991435b9291-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.536310 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpj52\" (UniqueName: \"kubernetes.io/projected/e4cd324a-4e67-4132-ad44-0991435b9291-kube-api-access-qpj52\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.631168 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.638609 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.642282 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.739857 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-operator-scripts\") pod \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\" (UID: \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.739902 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7910764c-65b4-4645-9d60-c25cbea434d5-operator-scripts\") pod \"7910764c-65b4-4645-9d60-c25cbea434d5\" (UID: \"7910764c-65b4-4645-9d60-c25cbea434d5\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.739924 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbdfs\" (UniqueName: \"kubernetes.io/projected/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-kube-api-access-hbdfs\") pod \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\" (UID: \"6391df09-ebfc-4e13-85b4-5aab4c8eefb0\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.739977 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdct9\" (UniqueName: \"kubernetes.io/projected/a1294507-4b94-4f46-91b3-7a3dffdd7494-kube-api-access-hdct9\") pod \"a1294507-4b94-4f46-91b3-7a3dffdd7494\" (UID: \"a1294507-4b94-4f46-91b3-7a3dffdd7494\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.739999 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k67b5\" (UniqueName: \"kubernetes.io/projected/7910764c-65b4-4645-9d60-c25cbea434d5-kube-api-access-k67b5\") pod \"7910764c-65b4-4645-9d60-c25cbea434d5\" (UID: \"7910764c-65b4-4645-9d60-c25cbea434d5\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.740066 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1294507-4b94-4f46-91b3-7a3dffdd7494-operator-scripts\") pod \"a1294507-4b94-4f46-91b3-7a3dffdd7494\" (UID: \"a1294507-4b94-4f46-91b3-7a3dffdd7494\") " Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.740332 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6391df09-ebfc-4e13-85b4-5aab4c8eefb0" (UID: "6391df09-ebfc-4e13-85b4-5aab4c8eefb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.740350 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7910764c-65b4-4645-9d60-c25cbea434d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7910764c-65b4-4645-9d60-c25cbea434d5" (UID: "7910764c-65b4-4645-9d60-c25cbea434d5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.740471 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.740497 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7910764c-65b4-4645-9d60-c25cbea434d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.740911 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1294507-4b94-4f46-91b3-7a3dffdd7494-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1294507-4b94-4f46-91b3-7a3dffdd7494" (UID: "a1294507-4b94-4f46-91b3-7a3dffdd7494"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.745340 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7910764c-65b4-4645-9d60-c25cbea434d5-kube-api-access-k67b5" (OuterVolumeSpecName: "kube-api-access-k67b5") pod "7910764c-65b4-4645-9d60-c25cbea434d5" (UID: "7910764c-65b4-4645-9d60-c25cbea434d5"). InnerVolumeSpecName "kube-api-access-k67b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.750483 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1294507-4b94-4f46-91b3-7a3dffdd7494-kube-api-access-hdct9" (OuterVolumeSpecName: "kube-api-access-hdct9") pod "a1294507-4b94-4f46-91b3-7a3dffdd7494" (UID: "a1294507-4b94-4f46-91b3-7a3dffdd7494"). InnerVolumeSpecName "kube-api-access-hdct9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.760031 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-kube-api-access-hbdfs" (OuterVolumeSpecName: "kube-api-access-hbdfs") pod "6391df09-ebfc-4e13-85b4-5aab4c8eefb0" (UID: "6391df09-ebfc-4e13-85b4-5aab4c8eefb0"). InnerVolumeSpecName "kube-api-access-hbdfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.810669 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4dv5t" event={"ID":"6391df09-ebfc-4e13-85b4-5aab4c8eefb0","Type":"ContainerDied","Data":"5c8197bc4549adeeb6420c9d221e9dc0113601dfb8880b790c9121097e64a811"} Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.810705 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c8197bc4549adeeb6420c9d221e9dc0113601dfb8880b790c9121097e64a811" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.810755 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4dv5t" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.825190 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerStarted","Data":"c1d10b5d13751ae13a633f6d3764b7dc008eaa65010c73c9f48ceb582a556fbd"} Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.825549 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.838516 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kfzf9" event={"ID":"e4cd324a-4e67-4132-ad44-0991435b9291","Type":"ContainerDied","Data":"52808b164cfb48b9e5d17f143a8d843d78ef0605b272dcdc6ea3ac314c45ff62"} Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.838574 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52808b164cfb48b9e5d17f143a8d843d78ef0605b272dcdc6ea3ac314c45ff62" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.838661 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kfzf9" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.842928 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" event={"ID":"a1294507-4b94-4f46-91b3-7a3dffdd7494","Type":"ContainerDied","Data":"8c79761fa3a0becd9b35727116cee89d630add083c721634ac12729d497f796c"} Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.842962 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c79761fa3a0becd9b35727116cee89d630add083c721634ac12729d497f796c" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.843018 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0fc8-account-create-update-drkqr" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.843409 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1294507-4b94-4f46-91b3-7a3dffdd7494-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.846109 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbdfs\" (UniqueName: \"kubernetes.io/projected/6391df09-ebfc-4e13-85b4-5aab4c8eefb0-kube-api-access-hbdfs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.846146 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdct9\" (UniqueName: \"kubernetes.io/projected/a1294507-4b94-4f46-91b3-7a3dffdd7494-kube-api-access-hdct9\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.846175 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k67b5\" (UniqueName: \"kubernetes.io/projected/7910764c-65b4-4645-9d60-c25cbea434d5-kube-api-access-k67b5\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.859831 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ef43-account-create-update-q9clt" event={"ID":"dd379fd3-2737-47ba-9f0f-59b46e24fed6","Type":"ContainerDied","Data":"b9c3bb68941eca9d5052a9177bfa19bee50a9f4ada269da6ad26e0079faa2d0e"} Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.859931 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c3bb68941eca9d5052a9177bfa19bee50a9f4ada269da6ad26e0079faa2d0e" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.859996 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ef43-account-create-update-q9clt" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.869373 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.648415636 podStartE2EDuration="8.869352508s" podCreationTimestamp="2026-02-14 11:02:30 +0000 UTC" firstStartedPulling="2026-02-14 11:02:31.668778412 +0000 UTC m=+1262.037405780" lastFinishedPulling="2026-02-14 11:02:37.889715284 +0000 UTC m=+1268.258342652" observedRunningTime="2026-02-14 11:02:38.855989798 +0000 UTC m=+1269.224617166" watchObservedRunningTime="2026-02-14 11:02:38.869352508 +0000 UTC m=+1269.237979876" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.870322 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" event={"ID":"7910764c-65b4-4645-9d60-c25cbea434d5","Type":"ContainerDied","Data":"e43e95735d4d3c1570528dd8606a707c9ba8a1491ca1e73328bbb9a1152fb593"} Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.870443 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43e95735d4d3c1570528dd8606a707c9ba8a1491ca1e73328bbb9a1152fb593" Feb 14 11:02:38 crc kubenswrapper[4736]: I0214 11:02:38.870520 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3b5f-account-create-update-xq5sl" Feb 14 11:02:39 crc kubenswrapper[4736]: I0214 11:02:39.205047 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:02:39 crc kubenswrapper[4736]: I0214 11:02:39.205800 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerName="glance-log" containerID="cri-o://b73ff731a78dccf794fd3f3ac9ab7859c2aac93f3c6cec5224a2f506180579e2" gracePeriod=30 Feb 14 11:02:39 crc kubenswrapper[4736]: I0214 11:02:39.205940 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerName="glance-httpd" containerID="cri-o://d7be487436ca3c68565c3c3b81b579e039fd40f7de365c45fc15c582f05ef6fd" gracePeriod=30 Feb 14 11:02:39 crc kubenswrapper[4736]: E0214 11:02:39.446031 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2e07c6_983a_4e5f_8389_ed2de539ee33.slice/crio-b73ff731a78dccf794fd3f3ac9ab7859c2aac93f3c6cec5224a2f506180579e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2e07c6_983a_4e5f_8389_ed2de539ee33.slice/crio-conmon-b73ff731a78dccf794fd3f3ac9ab7859c2aac93f3c6cec5224a2f506180579e2.scope\": RecentStats: unable to find data in memory cache]" Feb 14 11:02:39 crc kubenswrapper[4736]: I0214 11:02:39.890868 4736 generic.go:334] "Generic (PLEG): container finished" podID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerID="59fe292ef949872f6b06cf85ed7bb72db8b4e9c599d63fb6c70aa86f92eaa601" exitCode=0 Feb 14 11:02:39 crc kubenswrapper[4736]: I0214 11:02:39.891108 4736 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ae4ad0a-4038-4a87-943f-c3794df836c6","Type":"ContainerDied","Data":"59fe292ef949872f6b06cf85ed7bb72db8b4e9c599d63fb6c70aa86f92eaa601"} Feb 14 11:02:39 crc kubenswrapper[4736]: I0214 11:02:39.894668 4736 generic.go:334] "Generic (PLEG): container finished" podID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerID="b73ff731a78dccf794fd3f3ac9ab7859c2aac93f3c6cec5224a2f506180579e2" exitCode=143 Feb 14 11:02:39 crc kubenswrapper[4736]: I0214 11:02:39.894884 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cb2e07c6-983a-4e5f-8389-ed2de539ee33","Type":"ContainerDied","Data":"b73ff731a78dccf794fd3f3ac9ab7859c2aac93f3c6cec5224a2f506180579e2"} Feb 14 11:02:39 crc kubenswrapper[4736]: I0214 11:02:39.993606 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.066028 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-config-data\") pod \"6ae4ad0a-4038-4a87-943f-c3794df836c6\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.066965 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fp9x\" (UniqueName: \"kubernetes.io/projected/6ae4ad0a-4038-4a87-943f-c3794df836c6-kube-api-access-9fp9x\") pod \"6ae4ad0a-4038-4a87-943f-c3794df836c6\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.067030 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-combined-ca-bundle\") pod \"6ae4ad0a-4038-4a87-943f-c3794df836c6\" (UID: 
\"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.067072 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"6ae4ad0a-4038-4a87-943f-c3794df836c6\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.067103 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-httpd-run\") pod \"6ae4ad0a-4038-4a87-943f-c3794df836c6\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.067159 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-logs\") pod \"6ae4ad0a-4038-4a87-943f-c3794df836c6\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.067196 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-internal-tls-certs\") pod \"6ae4ad0a-4038-4a87-943f-c3794df836c6\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.067266 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-scripts\") pod \"6ae4ad0a-4038-4a87-943f-c3794df836c6\" (UID: \"6ae4ad0a-4038-4a87-943f-c3794df836c6\") " Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.070161 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-logs" (OuterVolumeSpecName: "logs") pod 
"6ae4ad0a-4038-4a87-943f-c3794df836c6" (UID: "6ae4ad0a-4038-4a87-943f-c3794df836c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.070412 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6ae4ad0a-4038-4a87-943f-c3794df836c6" (UID: "6ae4ad0a-4038-4a87-943f-c3794df836c6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.073408 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "6ae4ad0a-4038-4a87-943f-c3794df836c6" (UID: "6ae4ad0a-4038-4a87-943f-c3794df836c6"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.074842 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-scripts" (OuterVolumeSpecName: "scripts") pod "6ae4ad0a-4038-4a87-943f-c3794df836c6" (UID: "6ae4ad0a-4038-4a87-943f-c3794df836c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.074970 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae4ad0a-4038-4a87-943f-c3794df836c6-kube-api-access-9fp9x" (OuterVolumeSpecName: "kube-api-access-9fp9x") pod "6ae4ad0a-4038-4a87-943f-c3794df836c6" (UID: "6ae4ad0a-4038-4a87-943f-c3794df836c6"). InnerVolumeSpecName "kube-api-access-9fp9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.171049 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.171084 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.171097 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae4ad0a-4038-4a87-943f-c3794df836c6-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.171109 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.171120 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fp9x\" (UniqueName: \"kubernetes.io/projected/6ae4ad0a-4038-4a87-943f-c3794df836c6-kube-api-access-9fp9x\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.178583 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ae4ad0a-4038-4a87-943f-c3794df836c6" (UID: "6ae4ad0a-4038-4a87-943f-c3794df836c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.179858 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-config-data" (OuterVolumeSpecName: "config-data") pod "6ae4ad0a-4038-4a87-943f-c3794df836c6" (UID: "6ae4ad0a-4038-4a87-943f-c3794df836c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.196889 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6ae4ad0a-4038-4a87-943f-c3794df836c6" (UID: "6ae4ad0a-4038-4a87-943f-c3794df836c6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.200450 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.271958 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.272284 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.273308 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.273936 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.274032 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.274110 4736 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae4ad0a-4038-4a87-943f-c3794df836c6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.274280 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.457952 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.458244 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.458952 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d96c5d8-mfqqp" podUID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.920764 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.920811 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ae4ad0a-4038-4a87-943f-c3794df836c6","Type":"ContainerDied","Data":"dec3e88f1ffd1b043596fe85d3df8d54bc091d65440d632df5d8e5e8d2a8c702"} Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.920841 4736 scope.go:117] "RemoveContainer" containerID="59fe292ef949872f6b06cf85ed7bb72db8b4e9c599d63fb6c70aa86f92eaa601" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.947889 4736 scope.go:117] "RemoveContainer" containerID="286886288e8293a9f9f716288452cc220414a81c7aec0336bf085ca9099af496" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.950566 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.961130 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.985383 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:02:40 crc kubenswrapper[4736]: E0214 11:02:40.985734 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cd324a-4e67-4132-ad44-0991435b9291" containerName="mariadb-database-create" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.985764 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cd324a-4e67-4132-ad44-0991435b9291" containerName="mariadb-database-create" Feb 14 11:02:40 crc kubenswrapper[4736]: E0214 11:02:40.985783 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd379fd3-2737-47ba-9f0f-59b46e24fed6" containerName="mariadb-account-create-update" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.985790 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dd379fd3-2737-47ba-9f0f-59b46e24fed6" containerName="mariadb-account-create-update" Feb 14 11:02:40 crc kubenswrapper[4736]: E0214 11:02:40.985831 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9d691f-bb0e-42be-b3f9-9cfd979d4de7" containerName="mariadb-database-create" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.985838 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9d691f-bb0e-42be-b3f9-9cfd979d4de7" containerName="mariadb-database-create" Feb 14 11:02:40 crc kubenswrapper[4736]: E0214 11:02:40.985853 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1294507-4b94-4f46-91b3-7a3dffdd7494" containerName="mariadb-account-create-update" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.985860 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1294507-4b94-4f46-91b3-7a3dffdd7494" containerName="mariadb-account-create-update" Feb 14 11:02:40 crc kubenswrapper[4736]: E0214 11:02:40.985879 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6391df09-ebfc-4e13-85b4-5aab4c8eefb0" containerName="mariadb-database-create" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.985884 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6391df09-ebfc-4e13-85b4-5aab4c8eefb0" containerName="mariadb-database-create" Feb 14 11:02:40 crc kubenswrapper[4736]: E0214 11:02:40.985899 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7910764c-65b4-4645-9d60-c25cbea434d5" containerName="mariadb-account-create-update" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.985905 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7910764c-65b4-4645-9d60-c25cbea434d5" containerName="mariadb-account-create-update" Feb 14 11:02:40 crc kubenswrapper[4736]: E0214 11:02:40.985911 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerName="glance-httpd" Feb 14 11:02:40 crc 
kubenswrapper[4736]: I0214 11:02:40.985916 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerName="glance-httpd" Feb 14 11:02:40 crc kubenswrapper[4736]: E0214 11:02:40.985935 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerName="glance-log" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.985940 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerName="glance-log" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.986096 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerName="glance-log" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.986107 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9d691f-bb0e-42be-b3f9-9cfd979d4de7" containerName="mariadb-database-create" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.986114 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6391df09-ebfc-4e13-85b4-5aab4c8eefb0" containerName="mariadb-database-create" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.986514 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1294507-4b94-4f46-91b3-7a3dffdd7494" containerName="mariadb-account-create-update" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.986525 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cd324a-4e67-4132-ad44-0991435b9291" containerName="mariadb-database-create" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.986541 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd379fd3-2737-47ba-9f0f-59b46e24fed6" containerName="mariadb-account-create-update" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.986552 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7910764c-65b4-4645-9d60-c25cbea434d5" 
containerName="mariadb-account-create-update" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.986564 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae4ad0a-4038-4a87-943f-c3794df836c6" containerName="glance-httpd" Feb 14 11:02:40 crc kubenswrapper[4736]: I0214 11:02:40.995507 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.000483 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.000662 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.018687 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.091998 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.093393 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5qn6\" (UniqueName: \"kubernetes.io/projected/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-kube-api-access-v5qn6\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.093519 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.093627 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.096974 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.097026 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.097063 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.097212 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-logs\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.198716 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.198784 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.198810 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.198832 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.198884 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.198932 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.198951 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5qn6\" (UniqueName: \"kubernetes.io/projected/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-kube-api-access-v5qn6\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.198983 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.199849 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.200069 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") 
device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.200387 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-logs\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.205041 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.207383 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.220399 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.223518 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc 
kubenswrapper[4736]: I0214 11:02:41.232027 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5qn6\" (UniqueName: \"kubernetes.io/projected/f0aa2a69-bea9-4934-9b60-209ecd22eb0a-kube-api-access-v5qn6\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.265162 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0aa2a69-bea9-4934-9b60-209ecd22eb0a\") " pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.330390 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.612365 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.612869 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="ceilometer-central-agent" containerID="cri-o://4ca0a8b330420a48dd960b2467481af015af89e5e6928381e0f67bba2a5472fa" gracePeriod=30 Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.613262 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="proxy-httpd" containerID="cri-o://c1d10b5d13751ae13a633f6d3764b7dc008eaa65010c73c9f48ceb582a556fbd" gracePeriod=30 Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.613344 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14d82352-7f04-48c2-aa10-a088c7541213" 
containerName="ceilometer-notification-agent" containerID="cri-o://d432e17144570b1c5cf3f57f24ed4af4ddca06b04fc3e4a36b008cc650f3be63" gracePeriod=30 Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.613421 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="sg-core" containerID="cri-o://93a2095aa8473768918276273a107366bc441bc774074ab732f674321e6db153" gracePeriod=30 Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.918219 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.951680 4736 generic.go:334] "Generic (PLEG): container finished" podID="14d82352-7f04-48c2-aa10-a088c7541213" containerID="c1d10b5d13751ae13a633f6d3764b7dc008eaa65010c73c9f48ceb582a556fbd" exitCode=0 Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.952414 4736 generic.go:334] "Generic (PLEG): container finished" podID="14d82352-7f04-48c2-aa10-a088c7541213" containerID="93a2095aa8473768918276273a107366bc441bc774074ab732f674321e6db153" exitCode=2 Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.952439 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerDied","Data":"c1d10b5d13751ae13a633f6d3764b7dc008eaa65010c73c9f48ceb582a556fbd"} Feb 14 11:02:41 crc kubenswrapper[4736]: I0214 11:02:41.952464 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerDied","Data":"93a2095aa8473768918276273a107366bc441bc774074ab732f674321e6db153"} Feb 14 11:02:42 crc kubenswrapper[4736]: I0214 11:02:42.426655 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae4ad0a-4038-4a87-943f-c3794df836c6" 
path="/var/lib/kubelet/pods/6ae4ad0a-4038-4a87-943f-c3794df836c6/volumes" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.016940 4736 generic.go:334] "Generic (PLEG): container finished" podID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerID="d7be487436ca3c68565c3c3b81b579e039fd40f7de365c45fc15c582f05ef6fd" exitCode=0 Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.017409 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cb2e07c6-983a-4e5f-8389-ed2de539ee33","Type":"ContainerDied","Data":"d7be487436ca3c68565c3c3b81b579e039fd40f7de365c45fc15c582f05ef6fd"} Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.026761 4736 generic.go:334] "Generic (PLEG): container finished" podID="14d82352-7f04-48c2-aa10-a088c7541213" containerID="d432e17144570b1c5cf3f57f24ed4af4ddca06b04fc3e4a36b008cc650f3be63" exitCode=0 Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.026838 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerDied","Data":"d432e17144570b1c5cf3f57f24ed4af4ddca06b04fc3e4a36b008cc650f3be63"} Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.028797 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f0aa2a69-bea9-4934-9b60-209ecd22eb0a","Type":"ContainerStarted","Data":"690229802b7a62694631d3f264f67378cc847a6a43e4db9f224fe30965c3c7c1"} Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.028837 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f0aa2a69-bea9-4934-9b60-209ecd22eb0a","Type":"ContainerStarted","Data":"144979ba81f49ecd575351639aea6e771da614e175535953d98e2ce6caf9df44"} Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.041079 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.247467 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.247529 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cktcf\" (UniqueName: \"kubernetes.io/projected/cb2e07c6-983a-4e5f-8389-ed2de539ee33-kube-api-access-cktcf\") pod \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.247585 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-logs\") pod \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.247606 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-combined-ca-bundle\") pod \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.247627 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-httpd-run\") pod \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.247837 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-public-tls-certs\") pod \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.247861 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-scripts\") pod \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.247878 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-config-data\") pod \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\" (UID: \"cb2e07c6-983a-4e5f-8389-ed2de539ee33\") " Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.270944 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2e07c6-983a-4e5f-8389-ed2de539ee33-kube-api-access-cktcf" (OuterVolumeSpecName: "kube-api-access-cktcf") pod "cb2e07c6-983a-4e5f-8389-ed2de539ee33" (UID: "cb2e07c6-983a-4e5f-8389-ed2de539ee33"). InnerVolumeSpecName "kube-api-access-cktcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.281676 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-logs" (OuterVolumeSpecName: "logs") pod "cb2e07c6-983a-4e5f-8389-ed2de539ee33" (UID: "cb2e07c6-983a-4e5f-8389-ed2de539ee33"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.282209 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cb2e07c6-983a-4e5f-8389-ed2de539ee33" (UID: "cb2e07c6-983a-4e5f-8389-ed2de539ee33"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.286244 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "cb2e07c6-983a-4e5f-8389-ed2de539ee33" (UID: "cb2e07c6-983a-4e5f-8389-ed2de539ee33"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.288792 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-scripts" (OuterVolumeSpecName: "scripts") pod "cb2e07c6-983a-4e5f-8389-ed2de539ee33" (UID: "cb2e07c6-983a-4e5f-8389-ed2de539ee33"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.293852 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7m7bx"] Feb 14 11:02:43 crc kubenswrapper[4736]: E0214 11:02:43.294252 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerName="glance-httpd" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.294263 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerName="glance-httpd" Feb 14 11:02:43 crc kubenswrapper[4736]: E0214 11:02:43.294292 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerName="glance-log" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.294298 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerName="glance-log" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.294455 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerName="glance-httpd" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.294483 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" containerName="glance-log" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.295033 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.298598 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.298889 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.299009 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jz6qv" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.336887 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb2e07c6-983a-4e5f-8389-ed2de539ee33" (UID: "cb2e07c6-983a-4e5f-8389-ed2de539ee33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.339559 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7m7bx"] Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352224 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-scripts\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352261 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zltw\" (UniqueName: \"kubernetes.io/projected/5abf3335-1f39-43c3-96e4-dd6f9a17c937-kube-api-access-4zltw\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " 
pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352333 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352357 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-config-data\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352413 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352431 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352443 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cktcf\" (UniqueName: \"kubernetes.io/projected/cb2e07c6-983a-4e5f-8389-ed2de539ee33-kube-api-access-cktcf\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352451 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352459 4736 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.352467 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2e07c6-983a-4e5f-8389-ed2de539ee33-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.416226 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.454767 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.455275 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-config-data\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.455373 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-scripts\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.455395 4736 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4zltw\" (UniqueName: \"kubernetes.io/projected/5abf3335-1f39-43c3-96e4-dd6f9a17c937-kube-api-access-4zltw\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.455490 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.466715 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-config-data\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.479282 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.486947 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-config-data" (OuterVolumeSpecName: "config-data") pod "cb2e07c6-983a-4e5f-8389-ed2de539ee33" (UID: "cb2e07c6-983a-4e5f-8389-ed2de539ee33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.503266 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-scripts\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.515317 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zltw\" (UniqueName: \"kubernetes.io/projected/5abf3335-1f39-43c3-96e4-dd6f9a17c937-kube-api-access-4zltw\") pod \"nova-cell0-conductor-db-sync-7m7bx\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.515465 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cb2e07c6-983a-4e5f-8389-ed2de539ee33" (UID: "cb2e07c6-983a-4e5f-8389-ed2de539ee33"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.558115 4736 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.558233 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2e07c6-983a-4e5f-8389-ed2de539ee33-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:43 crc kubenswrapper[4736]: I0214 11:02:43.638777 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.039881 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f0aa2a69-bea9-4934-9b60-209ecd22eb0a","Type":"ContainerStarted","Data":"da61c2146434066f725530889b020ca14bad8f5af70f8da3c4c780daeaeb068f"} Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.045620 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cb2e07c6-983a-4e5f-8389-ed2de539ee33","Type":"ContainerDied","Data":"24100200c35f919964960605338a664a6a473211349fe1a6efdf29428d98f01f"} Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.045661 4736 scope.go:117] "RemoveContainer" containerID="d7be487436ca3c68565c3c3b81b579e039fd40f7de365c45fc15c582f05ef6fd" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.045783 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.083508 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.083486095 podStartE2EDuration="4.083486095s" podCreationTimestamp="2026-02-14 11:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:44.068281001 +0000 UTC m=+1274.436908379" watchObservedRunningTime="2026-02-14 11:02:44.083486095 +0000 UTC m=+1274.452113463" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.128334 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.128434 4736 scope.go:117] "RemoveContainer" containerID="b73ff731a78dccf794fd3f3ac9ab7859c2aac93f3c6cec5224a2f506180579e2" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.157731 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.199963 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.202554 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.209209 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.227526 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 11:02:44 crc kubenswrapper[4736]: W0214 11:02:44.245963 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5abf3335_1f39_43c3_96e4_dd6f9a17c937.slice/crio-4902fd44e03b5a836013be6b8872a1e761047bd4707dcf2d5a17a5712a07a5d6 WatchSource:0}: Error finding container 4902fd44e03b5a836013be6b8872a1e761047bd4707dcf2d5a17a5712a07a5d6: Status 404 returned error can't find the container with id 4902fd44e03b5a836013be6b8872a1e761047bd4707dcf2d5a17a5712a07a5d6 Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.252227 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.260012 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7m7bx"] Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.327998 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.328065 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd56t\" (UniqueName: \"kubernetes.io/projected/31f01831-be73-46fa-815b-bc32d58fb0fd-kube-api-access-sd56t\") pod 
\"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.328089 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f01831-be73-46fa-815b-bc32d58fb0fd-logs\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.328103 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.328139 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31f01831-be73-46fa-815b-bc32d58fb0fd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.328167 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.328187 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-config-data\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.328216 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-scripts\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.407556 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2e07c6-983a-4e5f-8389-ed2de539ee33" path="/var/lib/kubelet/pods/cb2e07c6-983a-4e5f-8389-ed2de539ee33/volumes" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.429891 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-config-data\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.430188 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-scripts\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.430381 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc 
kubenswrapper[4736]: I0214 11:02:44.430514 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd56t\" (UniqueName: \"kubernetes.io/projected/31f01831-be73-46fa-815b-bc32d58fb0fd-kube-api-access-sd56t\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.430614 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f01831-be73-46fa-815b-bc32d58fb0fd-logs\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.430698 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.430848 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31f01831-be73-46fa-815b-bc32d58fb0fd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.430964 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.435715 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f01831-be73-46fa-815b-bc32d58fb0fd-logs\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.436777 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31f01831-be73-46fa-815b-bc32d58fb0fd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.440121 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.441669 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.448310 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-config-data\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.448713 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-scripts\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.457463 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd56t\" (UniqueName: \"kubernetes.io/projected/31f01831-be73-46fa-815b-bc32d58fb0fd-kube-api-access-sd56t\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.457529 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f01831-be73-46fa-815b-bc32d58fb0fd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.482095 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"31f01831-be73-46fa-815b-bc32d58fb0fd\") " pod="openstack/glance-default-external-api-0" Feb 14 11:02:44 crc kubenswrapper[4736]: I0214 11:02:44.567192 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 11:02:45 crc kubenswrapper[4736]: I0214 11:02:45.057670 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7m7bx" event={"ID":"5abf3335-1f39-43c3-96e4-dd6f9a17c937","Type":"ContainerStarted","Data":"4902fd44e03b5a836013be6b8872a1e761047bd4707dcf2d5a17a5712a07a5d6"} Feb 14 11:02:45 crc kubenswrapper[4736]: I0214 11:02:45.198930 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 11:02:46 crc kubenswrapper[4736]: I0214 11:02:46.073247 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31f01831-be73-46fa-815b-bc32d58fb0fd","Type":"ContainerStarted","Data":"968f9cb29b42f4a9b94a1b32c516d3ed53a64b66525aea67d5af42bc3643e2b2"} Feb 14 11:02:46 crc kubenswrapper[4736]: I0214 11:02:46.073672 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31f01831-be73-46fa-815b-bc32d58fb0fd","Type":"ContainerStarted","Data":"6af107ab0c43f43b99c9072ace0984094df1c4c8850f29e9d649ae42583733d2"} Feb 14 11:02:47 crc kubenswrapper[4736]: I0214 11:02:47.082363 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31f01831-be73-46fa-815b-bc32d58fb0fd","Type":"ContainerStarted","Data":"d924bfe3c3eb74a4f3e6349802e0fd677f0a70591a2bd30103ba0bcb3a118b5d"} Feb 14 11:02:47 crc kubenswrapper[4736]: I0214 11:02:47.108924 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.108886639 podStartE2EDuration="3.108886639s" podCreationTimestamp="2026-02-14 11:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:02:47.104272614 +0000 UTC m=+1277.472899982" 
watchObservedRunningTime="2026-02-14 11:02:47.108886639 +0000 UTC m=+1277.477514007" Feb 14 11:02:50 crc kubenswrapper[4736]: I0214 11:02:50.273966 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:02:50 crc kubenswrapper[4736]: I0214 11:02:50.436390 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d96c5d8-mfqqp" podUID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 14 11:02:51 crc kubenswrapper[4736]: I0214 11:02:51.331544 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:51 crc kubenswrapper[4736]: I0214 11:02:51.331898 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:51 crc kubenswrapper[4736]: I0214 11:02:51.378548 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:51 crc kubenswrapper[4736]: I0214 11:02:51.429823 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:52 crc kubenswrapper[4736]: I0214 11:02:52.133801 4736 generic.go:334] "Generic (PLEG): container finished" podID="14d82352-7f04-48c2-aa10-a088c7541213" containerID="4ca0a8b330420a48dd960b2467481af015af89e5e6928381e0f67bba2a5472fa" exitCode=0 Feb 14 11:02:52 crc kubenswrapper[4736]: I0214 11:02:52.134979 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerDied","Data":"4ca0a8b330420a48dd960b2467481af015af89e5e6928381e0f67bba2a5472fa"} Feb 14 11:02:52 crc kubenswrapper[4736]: I0214 11:02:52.135045 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:52 crc kubenswrapper[4736]: I0214 11:02:52.135062 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:54 crc kubenswrapper[4736]: I0214 11:02:54.147069 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:02:54 crc kubenswrapper[4736]: I0214 11:02:54.147338 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:02:54 crc kubenswrapper[4736]: I0214 11:02:54.567494 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 11:02:54 crc kubenswrapper[4736]: I0214 11:02:54.567544 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 11:02:54 crc kubenswrapper[4736]: I0214 11:02:54.609334 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 11:02:54 crc kubenswrapper[4736]: I0214 11:02:54.609570 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 11:02:55 crc kubenswrapper[4736]: I0214 11:02:55.154270 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 11:02:55 crc kubenswrapper[4736]: I0214 11:02:55.154646 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 11:02:55 crc kubenswrapper[4736]: I0214 11:02:55.311974 4736 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:55 crc kubenswrapper[4736]: I0214 11:02:55.312315 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:02:55 crc kubenswrapper[4736]: I0214 11:02:55.316242 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.850156 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.965461 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-scripts\") pod \"14d82352-7f04-48c2-aa10-a088c7541213\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.966126 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-sg-core-conf-yaml\") pod \"14d82352-7f04-48c2-aa10-a088c7541213\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.966167 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-log-httpd\") pod \"14d82352-7f04-48c2-aa10-a088c7541213\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.966215 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-run-httpd\") pod \"14d82352-7f04-48c2-aa10-a088c7541213\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " Feb 14 11:02:56 crc 
kubenswrapper[4736]: I0214 11:02:56.966283 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-config-data\") pod \"14d82352-7f04-48c2-aa10-a088c7541213\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.966368 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94r5g\" (UniqueName: \"kubernetes.io/projected/14d82352-7f04-48c2-aa10-a088c7541213-kube-api-access-94r5g\") pod \"14d82352-7f04-48c2-aa10-a088c7541213\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.966441 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-combined-ca-bundle\") pod \"14d82352-7f04-48c2-aa10-a088c7541213\" (UID: \"14d82352-7f04-48c2-aa10-a088c7541213\") " Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.967172 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "14d82352-7f04-48c2-aa10-a088c7541213" (UID: "14d82352-7f04-48c2-aa10-a088c7541213"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.967407 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "14d82352-7f04-48c2-aa10-a088c7541213" (UID: "14d82352-7f04-48c2-aa10-a088c7541213"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.972670 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d82352-7f04-48c2-aa10-a088c7541213-kube-api-access-94r5g" (OuterVolumeSpecName: "kube-api-access-94r5g") pod "14d82352-7f04-48c2-aa10-a088c7541213" (UID: "14d82352-7f04-48c2-aa10-a088c7541213"). InnerVolumeSpecName "kube-api-access-94r5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:02:56 crc kubenswrapper[4736]: I0214 11:02:56.975926 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-scripts" (OuterVolumeSpecName: "scripts") pod "14d82352-7f04-48c2-aa10-a088c7541213" (UID: "14d82352-7f04-48c2-aa10-a088c7541213"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.060375 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "14d82352-7f04-48c2-aa10-a088c7541213" (UID: "14d82352-7f04-48c2-aa10-a088c7541213"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.071948 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.071978 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.071998 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.072006 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d82352-7f04-48c2-aa10-a088c7541213-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.072027 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94r5g\" (UniqueName: \"kubernetes.io/projected/14d82352-7f04-48c2-aa10-a088c7541213-kube-api-access-94r5g\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.084354 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14d82352-7f04-48c2-aa10-a088c7541213" (UID: "14d82352-7f04-48c2-aa10-a088c7541213"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.173342 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.175759 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d82352-7f04-48c2-aa10-a088c7541213","Type":"ContainerDied","Data":"a36ea8bcb3d78671aac2d62c665634f01f171cc1a2a09cfa72e12bac72c7a6e5"} Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.175809 4736 scope.go:117] "RemoveContainer" containerID="c1d10b5d13751ae13a633f6d3764b7dc008eaa65010c73c9f48ceb582a556fbd" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.175944 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.176164 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-config-data" (OuterVolumeSpecName: "config-data") pod "14d82352-7f04-48c2-aa10-a088c7541213" (UID: "14d82352-7f04-48c2-aa10-a088c7541213"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.204987 4736 scope.go:117] "RemoveContainer" containerID="93a2095aa8473768918276273a107366bc441bc774074ab732f674321e6db153" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.223606 4736 scope.go:117] "RemoveContainer" containerID="d432e17144570b1c5cf3f57f24ed4af4ddca06b04fc3e4a36b008cc650f3be63" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.245091 4736 scope.go:117] "RemoveContainer" containerID="4ca0a8b330420a48dd960b2467481af015af89e5e6928381e0f67bba2a5472fa" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.275318 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d82352-7f04-48c2-aa10-a088c7541213-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.514287 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.522892 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.545628 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:57 crc kubenswrapper[4736]: E0214 11:02:57.546024 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="sg-core" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.546041 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="sg-core" Feb 14 11:02:57 crc kubenswrapper[4736]: E0214 11:02:57.546064 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="ceilometer-central-agent" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.546075 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="ceilometer-central-agent" Feb 14 11:02:57 crc kubenswrapper[4736]: E0214 11:02:57.546090 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="ceilometer-notification-agent" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.546097 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="ceilometer-notification-agent" Feb 14 11:02:57 crc kubenswrapper[4736]: E0214 11:02:57.546108 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="proxy-httpd" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.546113 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="proxy-httpd" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.546270 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="ceilometer-notification-agent" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.546280 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="ceilometer-central-agent" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.546289 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="proxy-httpd" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.546306 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d82352-7f04-48c2-aa10-a088c7541213" containerName="sg-core" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.547799 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.554810 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 11:02:57 crc kubenswrapper[4736]: W0214 11:02:57.555025 4736 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: secrets "ceilometer-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 14 11:02:57 crc kubenswrapper[4736]: E0214 11:02:57.555226 4736 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ceilometer-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.576111 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.681639 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tspzv\" (UniqueName: \"kubernetes.io/projected/445507ae-4ec8-4300-98b4-9f0ed79941a1-kube-api-access-tspzv\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.681708 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 
11:02:57.681733 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-config-data\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.681777 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-scripts\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.681812 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-log-httpd\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.681838 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-run-httpd\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.681883 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.783934 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.784240 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-config-data\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.784377 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-scripts\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.784525 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-log-httpd\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.784631 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-run-httpd\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.784796 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.784901 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tspzv\" (UniqueName: \"kubernetes.io/projected/445507ae-4ec8-4300-98b4-9f0ed79941a1-kube-api-access-tspzv\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.785138 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-log-httpd\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.785178 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-run-httpd\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.788654 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.789126 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-config-data\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.791414 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " 
pod="openstack/ceilometer-0" Feb 14 11:02:57 crc kubenswrapper[4736]: I0214 11:02:57.802779 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tspzv\" (UniqueName: \"kubernetes.io/projected/445507ae-4ec8-4300-98b4-9f0ed79941a1-kube-api-access-tspzv\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:58 crc kubenswrapper[4736]: I0214 11:02:58.186433 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7m7bx" event={"ID":"5abf3335-1f39-43c3-96e4-dd6f9a17c937","Type":"ContainerStarted","Data":"19d9890c4585ddfa9a0e11a2550a75a73e9a3f63a6d78dd11b2f306ac3bc8196"} Feb 14 11:02:58 crc kubenswrapper[4736]: I0214 11:02:58.203925 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 11:02:58 crc kubenswrapper[4736]: I0214 11:02:58.204044 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 11:02:58 crc kubenswrapper[4736]: I0214 11:02:58.205207 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7m7bx" podStartSLOduration=2.548565807 podStartE2EDuration="15.205189281s" podCreationTimestamp="2026-02-14 11:02:43 +0000 UTC" firstStartedPulling="2026-02-14 11:02:44.249666815 +0000 UTC m=+1274.618294183" lastFinishedPulling="2026-02-14 11:02:56.906290289 +0000 UTC m=+1287.274917657" observedRunningTime="2026-02-14 11:02:58.202042289 +0000 UTC m=+1288.570669657" watchObservedRunningTime="2026-02-14 11:02:58.205189281 +0000 UTC m=+1288.573816649" Feb 14 11:02:58 crc kubenswrapper[4736]: I0214 11:02:58.208449 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 11:02:58 crc kubenswrapper[4736]: I0214 11:02:58.407665 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="14d82352-7f04-48c2-aa10-a088c7541213" path="/var/lib/kubelet/pods/14d82352-7f04-48c2-aa10-a088c7541213/volumes" Feb 14 11:02:58 crc kubenswrapper[4736]: E0214 11:02:58.785675 4736 secret.go:188] Couldn't get secret openstack/ceilometer-scripts: failed to sync secret cache: timed out waiting for the condition Feb 14 11:02:58 crc kubenswrapper[4736]: E0214 11:02:58.785837 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-scripts podName:445507ae-4ec8-4300-98b4-9f0ed79941a1 nodeName:}" failed. No retries permitted until 2026-02-14 11:02:59.285811287 +0000 UTC m=+1289.654438655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-scripts") pod "ceilometer-0" (UID: "445507ae-4ec8-4300-98b4-9f0ed79941a1") : failed to sync secret cache: timed out waiting for the condition Feb 14 11:02:58 crc kubenswrapper[4736]: I0214 11:02:58.821444 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 11:02:59 crc kubenswrapper[4736]: I0214 11:02:59.311714 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-scripts\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:59 crc kubenswrapper[4736]: I0214 11:02:59.324690 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-scripts\") pod \"ceilometer-0\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " pod="openstack/ceilometer-0" Feb 14 11:02:59 crc kubenswrapper[4736]: I0214 11:02:59.362712 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:02:59 crc kubenswrapper[4736]: I0214 11:02:59.880384 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:03:00 crc kubenswrapper[4736]: I0214 11:03:00.203325 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerStarted","Data":"60cddecf7d97272a98c423559ec31805003021f2b0b88cd807d44d00fa19e180"} Feb 14 11:03:00 crc kubenswrapper[4736]: I0214 11:03:00.272795 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:03:00 crc kubenswrapper[4736]: I0214 11:03:00.272859 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:03:00 crc kubenswrapper[4736]: I0214 11:03:00.273491 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"addd3be5783720e5b80a35ec2a30cd08864d12153bd2d833826a22af62c8838b"} pod="openstack/horizon-54b8d5f54d-bvjc4" containerMessage="Container horizon failed startup probe, will be restarted" Feb 14 11:03:00 crc kubenswrapper[4736]: I0214 11:03:00.273522 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" containerID="cri-o://addd3be5783720e5b80a35ec2a30cd08864d12153bd2d833826a22af62c8838b" gracePeriod=30 Feb 14 11:03:00 crc kubenswrapper[4736]: I0214 11:03:00.440220 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d96c5d8-mfqqp" 
podUID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 14 11:03:00 crc kubenswrapper[4736]: I0214 11:03:00.440585 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:03:00 crc kubenswrapper[4736]: I0214 11:03:00.441259 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d75c8d7443e295d15b6b896b7f6edfc518815583b7203ab9204c009d97e150d1"} pod="openstack/horizon-78d96c5d8-mfqqp" containerMessage="Container horizon failed startup probe, will be restarted" Feb 14 11:03:00 crc kubenswrapper[4736]: I0214 11:03:00.441292 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d96c5d8-mfqqp" podUID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerName="horizon" containerID="cri-o://d75c8d7443e295d15b6b896b7f6edfc518815583b7203ab9204c009d97e150d1" gracePeriod=30 Feb 14 11:03:01 crc kubenswrapper[4736]: I0214 11:03:01.213695 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerStarted","Data":"d923b35358a9058a0753455785e9b61ebdb4a8da0fd4074460b80b64ac5f773d"} Feb 14 11:03:01 crc kubenswrapper[4736]: I0214 11:03:01.243434 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:03:02 crc kubenswrapper[4736]: I0214 11:03:02.224892 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerStarted","Data":"167effffcbdfdcc269e94b8eb8c0d39b371275b976fb7478d60f769184e5c158"} Feb 14 11:03:02 crc kubenswrapper[4736]: I0214 11:03:02.225320 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerStarted","Data":"20f8d36bf25b43976fa083469d769be72cc00ff4cd07fa24822d6327c7894937"} Feb 14 11:03:04 crc kubenswrapper[4736]: I0214 11:03:04.241423 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerStarted","Data":"d4971abe50e27f239465345d6621b860ac1118c84b4c1da8ef728372b0aec94c"} Feb 14 11:03:04 crc kubenswrapper[4736]: I0214 11:03:04.241858 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 11:03:04 crc kubenswrapper[4736]: I0214 11:03:04.241684 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="proxy-httpd" containerID="cri-o://d4971abe50e27f239465345d6621b860ac1118c84b4c1da8ef728372b0aec94c" gracePeriod=30 Feb 14 11:03:04 crc kubenswrapper[4736]: I0214 11:03:04.241869 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="sg-core" containerID="cri-o://167effffcbdfdcc269e94b8eb8c0d39b371275b976fb7478d60f769184e5c158" gracePeriod=30 Feb 14 11:03:04 crc kubenswrapper[4736]: I0214 11:03:04.241640 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="ceilometer-central-agent" containerID="cri-o://d923b35358a9058a0753455785e9b61ebdb4a8da0fd4074460b80b64ac5f773d" gracePeriod=30 Feb 14 11:03:04 crc kubenswrapper[4736]: I0214 11:03:04.241724 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="ceilometer-notification-agent" 
containerID="cri-o://20f8d36bf25b43976fa083469d769be72cc00ff4cd07fa24822d6327c7894937" gracePeriod=30 Feb 14 11:03:04 crc kubenswrapper[4736]: I0214 11:03:04.273637 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.793889229 podStartE2EDuration="7.273616933s" podCreationTimestamp="2026-02-14 11:02:57 +0000 UTC" firstStartedPulling="2026-02-14 11:02:59.909529136 +0000 UTC m=+1290.278156494" lastFinishedPulling="2026-02-14 11:03:03.38925683 +0000 UTC m=+1293.757884198" observedRunningTime="2026-02-14 11:03:04.269385219 +0000 UTC m=+1294.638012597" watchObservedRunningTime="2026-02-14 11:03:04.273616933 +0000 UTC m=+1294.642244301" Feb 14 11:03:05 crc kubenswrapper[4736]: I0214 11:03:05.262953 4736 generic.go:334] "Generic (PLEG): container finished" podID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerID="d4971abe50e27f239465345d6621b860ac1118c84b4c1da8ef728372b0aec94c" exitCode=0 Feb 14 11:03:05 crc kubenswrapper[4736]: I0214 11:03:05.263270 4736 generic.go:334] "Generic (PLEG): container finished" podID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerID="167effffcbdfdcc269e94b8eb8c0d39b371275b976fb7478d60f769184e5c158" exitCode=2 Feb 14 11:03:05 crc kubenswrapper[4736]: I0214 11:03:05.263280 4736 generic.go:334] "Generic (PLEG): container finished" podID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerID="20f8d36bf25b43976fa083469d769be72cc00ff4cd07fa24822d6327c7894937" exitCode=0 Feb 14 11:03:05 crc kubenswrapper[4736]: I0214 11:03:05.263299 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerDied","Data":"d4971abe50e27f239465345d6621b860ac1118c84b4c1da8ef728372b0aec94c"} Feb 14 11:03:05 crc kubenswrapper[4736]: I0214 11:03:05.263343 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerDied","Data":"167effffcbdfdcc269e94b8eb8c0d39b371275b976fb7478d60f769184e5c158"} Feb 14 11:03:05 crc kubenswrapper[4736]: I0214 11:03:05.263354 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerDied","Data":"20f8d36bf25b43976fa083469d769be72cc00ff4cd07fa24822d6327c7894937"} Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.321275 4736 generic.go:334] "Generic (PLEG): container finished" podID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerID="d923b35358a9058a0753455785e9b61ebdb4a8da0fd4074460b80b64ac5f773d" exitCode=0 Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.321859 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerDied","Data":"d923b35358a9058a0753455785e9b61ebdb4a8da0fd4074460b80b64ac5f773d"} Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.321884 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"445507ae-4ec8-4300-98b4-9f0ed79941a1","Type":"ContainerDied","Data":"60cddecf7d97272a98c423559ec31805003021f2b0b88cd807d44d00fa19e180"} Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.321895 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60cddecf7d97272a98c423559ec31805003021f2b0b88cd807d44d00fa19e180" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.356793 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.440997 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-run-httpd\") pod \"445507ae-4ec8-4300-98b4-9f0ed79941a1\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.441123 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-config-data\") pod \"445507ae-4ec8-4300-98b4-9f0ed79941a1\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.441153 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-sg-core-conf-yaml\") pod \"445507ae-4ec8-4300-98b4-9f0ed79941a1\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.441565 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "445507ae-4ec8-4300-98b4-9f0ed79941a1" (UID: "445507ae-4ec8-4300-98b4-9f0ed79941a1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.441885 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-log-httpd\") pod \"445507ae-4ec8-4300-98b4-9f0ed79941a1\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.442052 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-scripts\") pod \"445507ae-4ec8-4300-98b4-9f0ed79941a1\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.442087 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-combined-ca-bundle\") pod \"445507ae-4ec8-4300-98b4-9f0ed79941a1\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.442197 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tspzv\" (UniqueName: \"kubernetes.io/projected/445507ae-4ec8-4300-98b4-9f0ed79941a1-kube-api-access-tspzv\") pod \"445507ae-4ec8-4300-98b4-9f0ed79941a1\" (UID: \"445507ae-4ec8-4300-98b4-9f0ed79941a1\") " Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.442272 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "445507ae-4ec8-4300-98b4-9f0ed79941a1" (UID: "445507ae-4ec8-4300-98b4-9f0ed79941a1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.442945 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.442960 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/445507ae-4ec8-4300-98b4-9f0ed79941a1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.450021 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445507ae-4ec8-4300-98b4-9f0ed79941a1-kube-api-access-tspzv" (OuterVolumeSpecName: "kube-api-access-tspzv") pod "445507ae-4ec8-4300-98b4-9f0ed79941a1" (UID: "445507ae-4ec8-4300-98b4-9f0ed79941a1"). InnerVolumeSpecName "kube-api-access-tspzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.456437 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-scripts" (OuterVolumeSpecName: "scripts") pod "445507ae-4ec8-4300-98b4-9f0ed79941a1" (UID: "445507ae-4ec8-4300-98b4-9f0ed79941a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.475566 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "445507ae-4ec8-4300-98b4-9f0ed79941a1" (UID: "445507ae-4ec8-4300-98b4-9f0ed79941a1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.521884 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "445507ae-4ec8-4300-98b4-9f0ed79941a1" (UID: "445507ae-4ec8-4300-98b4-9f0ed79941a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.544720 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.544766 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.544778 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tspzv\" (UniqueName: \"kubernetes.io/projected/445507ae-4ec8-4300-98b4-9f0ed79941a1-kube-api-access-tspzv\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.544788 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.559929 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-config-data" (OuterVolumeSpecName: "config-data") pod "445507ae-4ec8-4300-98b4-9f0ed79941a1" (UID: "445507ae-4ec8-4300-98b4-9f0ed79941a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:10 crc kubenswrapper[4736]: I0214 11:03:10.646814 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/445507ae-4ec8-4300-98b4-9f0ed79941a1-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.331236 4736 generic.go:334] "Generic (PLEG): container finished" podID="5abf3335-1f39-43c3-96e4-dd6f9a17c937" containerID="19d9890c4585ddfa9a0e11a2550a75a73e9a3f63a6d78dd11b2f306ac3bc8196" exitCode=0 Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.331278 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7m7bx" event={"ID":"5abf3335-1f39-43c3-96e4-dd6f9a17c937","Type":"ContainerDied","Data":"19d9890c4585ddfa9a0e11a2550a75a73e9a3f63a6d78dd11b2f306ac3bc8196"} Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.331506 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.380334 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.388704 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.404496 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:03:11 crc kubenswrapper[4736]: E0214 11:03:11.405048 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="ceilometer-notification-agent" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.405069 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="ceilometer-notification-agent" Feb 14 11:03:11 crc kubenswrapper[4736]: E0214 11:03:11.405085 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="sg-core" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.405092 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="sg-core" Feb 14 11:03:11 crc kubenswrapper[4736]: E0214 11:03:11.405105 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="proxy-httpd" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.405113 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="proxy-httpd" Feb 14 11:03:11 crc kubenswrapper[4736]: E0214 11:03:11.405147 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="ceilometer-central-agent" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.405154 4736 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="ceilometer-central-agent" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.405368 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="sg-core" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.405391 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="ceilometer-notification-agent" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.405408 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="ceilometer-central-agent" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.405418 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" containerName="proxy-httpd" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.407185 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.409591 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.409663 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.428423 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.566781 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-scripts\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.566847 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-run-httpd\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.566889 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.566932 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-config-data\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " 
pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.567073 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-log-httpd\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.567250 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px2gt\" (UniqueName: \"kubernetes.io/projected/0ffb44a3-6794-47b1-9418-f6c076a9577b-kube-api-access-px2gt\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.567278 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.668495 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px2gt\" (UniqueName: \"kubernetes.io/projected/0ffb44a3-6794-47b1-9418-f6c076a9577b-kube-api-access-px2gt\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.668543 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.668605 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-scripts\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.668623 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-run-httpd\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.668651 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.668673 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-config-data\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.668703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-log-httpd\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.669182 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-run-httpd\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 
11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.669231 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-log-httpd\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.676123 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.676291 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-scripts\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.677189 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-config-data\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.679309 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.688393 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px2gt\" (UniqueName: \"kubernetes.io/projected/0ffb44a3-6794-47b1-9418-f6c076a9577b-kube-api-access-px2gt\") pod \"ceilometer-0\" (UID: 
\"0ffb44a3-6794-47b1-9418-f6c076a9577b\") " pod="openstack/ceilometer-0" Feb 14 11:03:11 crc kubenswrapper[4736]: I0214 11:03:11.725561 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.212422 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.341115 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerStarted","Data":"32c1557f899efb006f9f8ed0d0414e62a78653ff48d40bafd6e3336fa4495bdd"} Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.407968 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="445507ae-4ec8-4300-98b4-9f0ed79941a1" path="/var/lib/kubelet/pods/445507ae-4ec8-4300-98b4-9f0ed79941a1/volumes" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.757587 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.892485 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-combined-ca-bundle\") pod \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.892555 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-config-data\") pod \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.892628 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zltw\" (UniqueName: \"kubernetes.io/projected/5abf3335-1f39-43c3-96e4-dd6f9a17c937-kube-api-access-4zltw\") pod \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.892654 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-scripts\") pod \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\" (UID: \"5abf3335-1f39-43c3-96e4-dd6f9a17c937\") " Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.899150 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-scripts" (OuterVolumeSpecName: "scripts") pod "5abf3335-1f39-43c3-96e4-dd6f9a17c937" (UID: "5abf3335-1f39-43c3-96e4-dd6f9a17c937"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.900872 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5abf3335-1f39-43c3-96e4-dd6f9a17c937-kube-api-access-4zltw" (OuterVolumeSpecName: "kube-api-access-4zltw") pod "5abf3335-1f39-43c3-96e4-dd6f9a17c937" (UID: "5abf3335-1f39-43c3-96e4-dd6f9a17c937"). InnerVolumeSpecName "kube-api-access-4zltw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.937023 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-config-data" (OuterVolumeSpecName: "config-data") pod "5abf3335-1f39-43c3-96e4-dd6f9a17c937" (UID: "5abf3335-1f39-43c3-96e4-dd6f9a17c937"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.949814 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5abf3335-1f39-43c3-96e4-dd6f9a17c937" (UID: "5abf3335-1f39-43c3-96e4-dd6f9a17c937"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.994625 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.994656 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.994667 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zltw\" (UniqueName: \"kubernetes.io/projected/5abf3335-1f39-43c3-96e4-dd6f9a17c937-kube-api-access-4zltw\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:12 crc kubenswrapper[4736]: I0214 11:03:12.994676 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5abf3335-1f39-43c3-96e4-dd6f9a17c937-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.350065 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7m7bx" event={"ID":"5abf3335-1f39-43c3-96e4-dd6f9a17c937","Type":"ContainerDied","Data":"4902fd44e03b5a836013be6b8872a1e761047bd4707dcf2d5a17a5712a07a5d6"} Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.350105 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4902fd44e03b5a836013be6b8872a1e761047bd4707dcf2d5a17a5712a07a5d6" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.350176 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7m7bx" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.363599 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerStarted","Data":"f2222a5e60019c8de3edda36b27acd096d01dc0658da5ef797621feb11b3ccab"} Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.534806 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 11:03:13 crc kubenswrapper[4736]: E0214 11:03:13.535595 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5abf3335-1f39-43c3-96e4-dd6f9a17c937" containerName="nova-cell0-conductor-db-sync" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.535616 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5abf3335-1f39-43c3-96e4-dd6f9a17c937" containerName="nova-cell0-conductor-db-sync" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.535828 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5abf3335-1f39-43c3-96e4-dd6f9a17c937" containerName="nova-cell0-conductor-db-sync" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.536407 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.550757 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.564632 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jz6qv" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.564980 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.613284 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd086e8-4f54-40fa-9f03-f2434e27ce21-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2cd086e8-4f54-40fa-9f03-f2434e27ce21\") " pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.613367 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd086e8-4f54-40fa-9f03-f2434e27ce21-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2cd086e8-4f54-40fa-9f03-f2434e27ce21\") " pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.613523 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlwg6\" (UniqueName: \"kubernetes.io/projected/2cd086e8-4f54-40fa-9f03-f2434e27ce21-kube-api-access-wlwg6\") pod \"nova-cell0-conductor-0\" (UID: \"2cd086e8-4f54-40fa-9f03-f2434e27ce21\") " pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.715174 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlwg6\" (UniqueName: 
\"kubernetes.io/projected/2cd086e8-4f54-40fa-9f03-f2434e27ce21-kube-api-access-wlwg6\") pod \"nova-cell0-conductor-0\" (UID: \"2cd086e8-4f54-40fa-9f03-f2434e27ce21\") " pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.715279 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd086e8-4f54-40fa-9f03-f2434e27ce21-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2cd086e8-4f54-40fa-9f03-f2434e27ce21\") " pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.715334 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd086e8-4f54-40fa-9f03-f2434e27ce21-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2cd086e8-4f54-40fa-9f03-f2434e27ce21\") " pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.720367 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd086e8-4f54-40fa-9f03-f2434e27ce21-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2cd086e8-4f54-40fa-9f03-f2434e27ce21\") " pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.720384 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd086e8-4f54-40fa-9f03-f2434e27ce21-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2cd086e8-4f54-40fa-9f03-f2434e27ce21\") " pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.735543 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlwg6\" (UniqueName: \"kubernetes.io/projected/2cd086e8-4f54-40fa-9f03-f2434e27ce21-kube-api-access-wlwg6\") pod \"nova-cell0-conductor-0\" (UID: 
\"2cd086e8-4f54-40fa-9f03-f2434e27ce21\") " pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:13 crc kubenswrapper[4736]: I0214 11:03:13.924947 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:14 crc kubenswrapper[4736]: I0214 11:03:14.383308 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerStarted","Data":"7c9e8a124f3ee742ef112462960112fb61e196ea6f0cfaef9edb481c0640db09"} Feb 14 11:03:14 crc kubenswrapper[4736]: I0214 11:03:14.452279 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 11:03:14 crc kubenswrapper[4736]: I0214 11:03:14.793981 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:03:15 crc kubenswrapper[4736]: I0214 11:03:15.412250 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerStarted","Data":"8672497a34a4b38ae9263f4e50394172d823c234d6a4b53c0a4eb6a64ce1a35e"} Feb 14 11:03:15 crc kubenswrapper[4736]: I0214 11:03:15.420003 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2cd086e8-4f54-40fa-9f03-f2434e27ce21","Type":"ContainerStarted","Data":"99233de27c6a4813953c44456a673c1b05e197214169926229ba8bdbb129ddae"} Feb 14 11:03:15 crc kubenswrapper[4736]: I0214 11:03:15.420044 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2cd086e8-4f54-40fa-9f03-f2434e27ce21","Type":"ContainerStarted","Data":"484c02b462ce93aaf8e1d4f61014656d3a89a33b3d0842b642f47f02d3443113"} Feb 14 11:03:15 crc kubenswrapper[4736]: I0214 11:03:15.420879 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 14 11:03:16 crc kubenswrapper[4736]: 
I0214 11:03:16.428986 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerStarted","Data":"767436af2ec86d8f399b08ed2190713dea404732b8db8c1f2ba8d785b1dbcd26"} Feb 14 11:03:16 crc kubenswrapper[4736]: I0214 11:03:16.429207 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="ceilometer-central-agent" containerID="cri-o://f2222a5e60019c8de3edda36b27acd096d01dc0658da5ef797621feb11b3ccab" gracePeriod=30 Feb 14 11:03:16 crc kubenswrapper[4736]: I0214 11:03:16.429231 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="sg-core" containerID="cri-o://8672497a34a4b38ae9263f4e50394172d823c234d6a4b53c0a4eb6a64ce1a35e" gracePeriod=30 Feb 14 11:03:16 crc kubenswrapper[4736]: I0214 11:03:16.429225 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="proxy-httpd" containerID="cri-o://767436af2ec86d8f399b08ed2190713dea404732b8db8c1f2ba8d785b1dbcd26" gracePeriod=30 Feb 14 11:03:16 crc kubenswrapper[4736]: I0214 11:03:16.429258 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="ceilometer-notification-agent" containerID="cri-o://7c9e8a124f3ee742ef112462960112fb61e196ea6f0cfaef9edb481c0640db09" gracePeriod=30 Feb 14 11:03:16 crc kubenswrapper[4736]: I0214 11:03:16.462639 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.1364712949999998 podStartE2EDuration="5.462618922s" podCreationTimestamp="2026-02-14 11:03:11 +0000 UTC" firstStartedPulling="2026-02-14 11:03:12.211368122 
+0000 UTC m=+1302.579995490" lastFinishedPulling="2026-02-14 11:03:15.537515749 +0000 UTC m=+1305.906143117" observedRunningTime="2026-02-14 11:03:16.455733452 +0000 UTC m=+1306.824360820" watchObservedRunningTime="2026-02-14 11:03:16.462618922 +0000 UTC m=+1306.831246280" Feb 14 11:03:16 crc kubenswrapper[4736]: I0214 11:03:16.463806 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.463731205 podStartE2EDuration="3.463731205s" podCreationTimestamp="2026-02-14 11:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:03:15.437115523 +0000 UTC m=+1305.805742891" watchObservedRunningTime="2026-02-14 11:03:16.463731205 +0000 UTC m=+1306.832358583" Feb 14 11:03:17 crc kubenswrapper[4736]: I0214 11:03:17.439132 4736 generic.go:334] "Generic (PLEG): container finished" podID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerID="767436af2ec86d8f399b08ed2190713dea404732b8db8c1f2ba8d785b1dbcd26" exitCode=0 Feb 14 11:03:17 crc kubenswrapper[4736]: I0214 11:03:17.439440 4736 generic.go:334] "Generic (PLEG): container finished" podID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerID="8672497a34a4b38ae9263f4e50394172d823c234d6a4b53c0a4eb6a64ce1a35e" exitCode=2 Feb 14 11:03:17 crc kubenswrapper[4736]: I0214 11:03:17.439451 4736 generic.go:334] "Generic (PLEG): container finished" podID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerID="7c9e8a124f3ee742ef112462960112fb61e196ea6f0cfaef9edb481c0640db09" exitCode=0 Feb 14 11:03:17 crc kubenswrapper[4736]: I0214 11:03:17.439285 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerDied","Data":"767436af2ec86d8f399b08ed2190713dea404732b8db8c1f2ba8d785b1dbcd26"} Feb 14 11:03:17 crc kubenswrapper[4736]: I0214 11:03:17.440153 4736 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerDied","Data":"8672497a34a4b38ae9263f4e50394172d823c234d6a4b53c0a4eb6a64ce1a35e"}
Feb 14 11:03:17 crc kubenswrapper[4736]: I0214 11:03:17.440167 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerDied","Data":"7c9e8a124f3ee742ef112462960112fb61e196ea6f0cfaef9edb481c0640db09"}
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.480981 4736 generic.go:334] "Generic (PLEG): container finished" podID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerID="f2222a5e60019c8de3edda36b27acd096d01dc0658da5ef797621feb11b3ccab" exitCode=0
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.481014 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerDied","Data":"f2222a5e60019c8de3edda36b27acd096d01dc0658da5ef797621feb11b3ccab"}
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.481700 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ffb44a3-6794-47b1-9418-f6c076a9577b","Type":"ContainerDied","Data":"32c1557f899efb006f9f8ed0d0414e62a78653ff48d40bafd6e3336fa4495bdd"}
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.481722 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c1557f899efb006f9f8ed0d0414e62a78653ff48d40bafd6e3336fa4495bdd"
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.486468 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.596800 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-scripts\") pod \"0ffb44a3-6794-47b1-9418-f6c076a9577b\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") "
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.596862 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-combined-ca-bundle\") pod \"0ffb44a3-6794-47b1-9418-f6c076a9577b\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") "
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.596905 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-log-httpd\") pod \"0ffb44a3-6794-47b1-9418-f6c076a9577b\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") "
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.596931 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-sg-core-conf-yaml\") pod \"0ffb44a3-6794-47b1-9418-f6c076a9577b\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") "
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.596964 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-run-httpd\") pod \"0ffb44a3-6794-47b1-9418-f6c076a9577b\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") "
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.597029 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px2gt\" (UniqueName: \"kubernetes.io/projected/0ffb44a3-6794-47b1-9418-f6c076a9577b-kube-api-access-px2gt\") pod \"0ffb44a3-6794-47b1-9418-f6c076a9577b\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") "
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.597059 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-config-data\") pod \"0ffb44a3-6794-47b1-9418-f6c076a9577b\" (UID: \"0ffb44a3-6794-47b1-9418-f6c076a9577b\") "
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.598144 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ffb44a3-6794-47b1-9418-f6c076a9577b" (UID: "0ffb44a3-6794-47b1-9418-f6c076a9577b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.598390 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ffb44a3-6794-47b1-9418-f6c076a9577b" (UID: "0ffb44a3-6794-47b1-9418-f6c076a9577b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.602801 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffb44a3-6794-47b1-9418-f6c076a9577b-kube-api-access-px2gt" (OuterVolumeSpecName: "kube-api-access-px2gt") pod "0ffb44a3-6794-47b1-9418-f6c076a9577b" (UID: "0ffb44a3-6794-47b1-9418-f6c076a9577b"). InnerVolumeSpecName "kube-api-access-px2gt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.605110 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-scripts" (OuterVolumeSpecName: "scripts") pod "0ffb44a3-6794-47b1-9418-f6c076a9577b" (UID: "0ffb44a3-6794-47b1-9418-f6c076a9577b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.648986 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ffb44a3-6794-47b1-9418-f6c076a9577b" (UID: "0ffb44a3-6794-47b1-9418-f6c076a9577b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.699108 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.699172 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.699187 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.699201 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ffb44a3-6794-47b1-9418-f6c076a9577b-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.699212 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px2gt\" (UniqueName: \"kubernetes.io/projected/0ffb44a3-6794-47b1-9418-f6c076a9577b-kube-api-access-px2gt\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.702430 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ffb44a3-6794-47b1-9418-f6c076a9577b" (UID: "0ffb44a3-6794-47b1-9418-f6c076a9577b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.714920 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-config-data" (OuterVolumeSpecName: "config-data") pod "0ffb44a3-6794-47b1-9418-f6c076a9577b" (UID: "0ffb44a3-6794-47b1-9418-f6c076a9577b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.801296 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:21 crc kubenswrapper[4736]: I0214 11:03:21.801507 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffb44a3-6794-47b1-9418-f6c076a9577b-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.489628 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.527195 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.544014 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.557870 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 14 11:03:22 crc kubenswrapper[4736]: E0214 11:03:22.558279 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="proxy-httpd"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.558294 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="proxy-httpd"
Feb 14 11:03:22 crc kubenswrapper[4736]: E0214 11:03:22.558305 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="ceilometer-notification-agent"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.558311 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="ceilometer-notification-agent"
Feb 14 11:03:22 crc kubenswrapper[4736]: E0214 11:03:22.558327 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="ceilometer-central-agent"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.558333 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="ceilometer-central-agent"
Feb 14 11:03:22 crc kubenswrapper[4736]: E0214 11:03:22.558344 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="sg-core"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.558350 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="sg-core"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.558504 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="ceilometer-notification-agent"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.558517 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="sg-core"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.558524 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="proxy-httpd"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.558543 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" containerName="ceilometer-central-agent"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.560220 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.565075 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.566160 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.604156 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.717955 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-config-data\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.718314 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-scripts\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.718450 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.718693 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-log-httpd\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.718847 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.719010 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx2c6\" (UniqueName: \"kubernetes.io/projected/227132b1-e84d-44fd-8991-9161edfd4f15-kube-api-access-wx2c6\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.719221 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-run-httpd\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.820475 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-log-httpd\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.820527 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.820565 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx2c6\" (UniqueName: \"kubernetes.io/projected/227132b1-e84d-44fd-8991-9161edfd4f15-kube-api-access-wx2c6\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.820607 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-run-httpd\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.820681 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-config-data\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.820725 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-scripts\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.820765 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.821095 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-log-httpd\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.821520 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-run-httpd\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.827682 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.827764 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-config-data\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.827731 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-scripts\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.840511 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx2c6\" (UniqueName: \"kubernetes.io/projected/227132b1-e84d-44fd-8991-9161edfd4f15-kube-api-access-wx2c6\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.849001 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " pod="openstack/ceilometer-0"
Feb 14 11:03:22 crc kubenswrapper[4736]: I0214 11:03:22.917333 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 11:03:23 crc kubenswrapper[4736]: I0214 11:03:23.413232 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 11:03:23 crc kubenswrapper[4736]: I0214 11:03:23.422559 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 11:03:23 crc kubenswrapper[4736]: I0214 11:03:23.498610 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerStarted","Data":"9ef267a1e985d6b2e69f3deedd9fed44331a83b80cb8019f98d9aef05954fbf9"}
Feb 14 11:03:23 crc kubenswrapper[4736]: I0214 11:03:23.984658 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.408154 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffb44a3-6794-47b1-9418-f6c076a9577b" path="/var/lib/kubelet/pods/0ffb44a3-6794-47b1-9418-f6c076a9577b/volumes"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.508970 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerStarted","Data":"315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2"}
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.688405 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-lgxl6"]
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.689828 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.695280 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.725573 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.771805 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lgxl6"]
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.871045 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m578j\" (UniqueName: \"kubernetes.io/projected/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-kube-api-access-m578j\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.871198 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-scripts\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.871273 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-config-data\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.871341 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.910032 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.916038 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.919391 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.921897 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.973227 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-config-data\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.973302 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.973348 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m578j\" (UniqueName: \"kubernetes.io/projected/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-kube-api-access-m578j\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.973412 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-scripts\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.979551 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-config-data\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.986409 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-scripts\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:24 crc kubenswrapper[4736]: I0214 11:03:24.995336 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.011498 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.014046 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.021438 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m578j\" (UniqueName: \"kubernetes.io/projected/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-kube-api-access-m578j\") pod \"nova-cell0-cell-mapping-lgxl6\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.024391 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.054770 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.055905 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.057456 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.071665 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.075278 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-config-data\") pod \"nova-scheduler-0\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.075499 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk4v7\" (UniqueName: \"kubernetes.io/projected/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-kube-api-access-fk4v7\") pod \"nova-scheduler-0\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.075577 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.079495 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.130310 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lgxl6"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.176983 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-config-data\") pod \"nova-scheduler-0\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.177038 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-config-data\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.177070 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.177114 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk4v7\" (UniqueName: \"kubernetes.io/projected/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-kube-api-access-fk4v7\") pod \"nova-scheduler-0\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.177139 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blsg8\" (UniqueName: \"kubernetes.io/projected/75255565-4d85-4d9a-917b-e4d9edd33154-kube-api-access-blsg8\") pod \"nova-cell1-novncproxy-0\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.177163 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.177256 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vtr4\" (UniqueName: \"kubernetes.io/projected/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-kube-api-access-6vtr4\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.177296 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-logs\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.177399 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.177428 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.229480 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.230145 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-config-data\") pod \"nova-scheduler-0\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.235336 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk4v7\" (UniqueName: \"kubernetes.io/projected/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-kube-api-access-fk4v7\") pod \"nova-scheduler-0\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.242637 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.278959 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.279014 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.279061 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-config-data\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.279090 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.279160 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blsg8\" (UniqueName: \"kubernetes.io/projected/75255565-4d85-4d9a-917b-e4d9edd33154-kube-api-access-blsg8\") pod \"nova-cell1-novncproxy-0\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.279225 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vtr4\" (UniqueName: \"kubernetes.io/projected/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-kube-api-access-6vtr4\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.279250 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-logs\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.279608 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-logs\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.290867 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.292289 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.298025 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.306967 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.308615 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-config-data\") pod \"nova-api-0\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.313269 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.332053 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.332432 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blsg8\" (UniqueName: \"kubernetes.io/projected/75255565-4d85-4d9a-917b-e4d9edd33154-kube-api-access-blsg8\") pod \"nova-cell1-novncproxy-0\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") "
pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.354147 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.380543 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-logs\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.382173 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.382399 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdkrq\" (UniqueName: \"kubernetes.io/projected/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-kube-api-access-jdkrq\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.382493 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-config-data\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.411261 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vtr4\" (UniqueName: \"kubernetes.io/projected/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-kube-api-access-6vtr4\") pod \"nova-api-0\" (UID: 
\"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") " pod="openstack/nova-api-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.448002 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dt4b7"] Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.459413 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.464535 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.471886 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483494 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9z6w\" (UniqueName: \"kubernetes.io/projected/e0d4ecd4-8be3-4c23-ae88-f93464271353-kube-api-access-r9z6w\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483541 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdkrq\" (UniqueName: \"kubernetes.io/projected/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-kube-api-access-jdkrq\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483563 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-config-data\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483589 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483635 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-logs\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483664 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-svc\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483682 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483731 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483783 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-config\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.483813 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.504948 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-logs\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.507511 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-config-data\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.511809 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dt4b7"] Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.512338 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdkrq\" (UniqueName: \"kubernetes.io/projected/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-kube-api-access-jdkrq\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.525882 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") " pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.563817 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerStarted","Data":"a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f"} Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.585370 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9z6w\" (UniqueName: \"kubernetes.io/projected/e0d4ecd4-8be3-4c23-ae88-f93464271353-kube-api-access-r9z6w\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.585426 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.585490 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-svc\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.585510 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.585551 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-config\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.585574 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.586569 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-svc\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.586585 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.587437 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-swift-storage-0\") pod 
\"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.587687 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-config\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.588166 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.610342 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9z6w\" (UniqueName: \"kubernetes.io/projected/e0d4ecd4-8be3-4c23-ae88-f93464271353-kube-api-access-r9z6w\") pod \"dnsmasq-dns-757b4f8459-dt4b7\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.636895 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.793257 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:03:25 crc kubenswrapper[4736]: I0214 11:03:25.938634 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lgxl6"] Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.198347 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.257339 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.460899 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.584887 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9","Type":"ContainerStarted","Data":"829081f4a733f41ae95ac6d42bb57ae977c7a9f0e714de92a1b18445e25831ed"} Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.593464 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d7d2aab-a806-4967-a2b1-401ade4dbb6e","Type":"ContainerStarted","Data":"afab2991814634aa0bf3ff67ee4ffc1bd640e9fc19da9ca5acd78a7781607d3c"} Feb 14 11:03:26 crc kubenswrapper[4736]: W0214 11:03:26.593589 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9775a431_cbfd_4afb_9c1e_054dc5c20c7f.slice/crio-47bfa531ab6b0d1a8fecf5feb2eacd837057bb35b0365ceaec4f5a43f48921d2 WatchSource:0}: Error finding container 47bfa531ab6b0d1a8fecf5feb2eacd837057bb35b0365ceaec4f5a43f48921d2: Status 404 returned error can't find the container with id 47bfa531ab6b0d1a8fecf5feb2eacd837057bb35b0365ceaec4f5a43f48921d2 Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.598295 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:03:26 
crc kubenswrapper[4736]: I0214 11:03:26.601874 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerStarted","Data":"d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d"} Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.612995 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75255565-4d85-4d9a-917b-e4d9edd33154","Type":"ContainerStarted","Data":"ca7ba831b838fe537e1fef921646e0e396ae27f98633800bc4d6fa88c8cee215"} Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.645846 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lgxl6" event={"ID":"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5","Type":"ContainerStarted","Data":"baff73b7359a735febd8471b1d7cdd096cdc1d642d1e6de5aac154ac28b30abf"} Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.645887 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lgxl6" event={"ID":"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5","Type":"ContainerStarted","Data":"cc30d22054bc8af613593a79ac82d5c19ef0b5adb6999e0802281f04fd985782"} Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.666447 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-lgxl6" podStartSLOduration=2.666427489 podStartE2EDuration="2.666427489s" podCreationTimestamp="2026-02-14 11:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:03:26.663922136 +0000 UTC m=+1317.032549504" watchObservedRunningTime="2026-02-14 11:03:26.666427489 +0000 UTC m=+1317.035054857" Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.736255 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dt4b7"] Feb 14 11:03:26 crc 
kubenswrapper[4736]: I0214 11:03:26.893660 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nsn99"] Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.904496 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.907841 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nsn99"] Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.910909 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.916636 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.966002 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.966084 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-config-data\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.966133 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-scripts\") pod \"nova-cell1-conductor-db-sync-nsn99\" 
(UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:26 crc kubenswrapper[4736]: I0214 11:03:26.966178 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzllp\" (UniqueName: \"kubernetes.io/projected/5b151935-66d0-44a9-b6bb-4760eb23e60f-kube-api-access-zzllp\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.068334 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-config-data\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.068429 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-scripts\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.069579 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzllp\" (UniqueName: \"kubernetes.io/projected/5b151935-66d0-44a9-b6bb-4760eb23e60f-kube-api-access-zzllp\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.069795 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.076060 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-config-data\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.076848 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.088302 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-scripts\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.091888 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzllp\" (UniqueName: \"kubernetes.io/projected/5b151935-66d0-44a9-b6bb-4760eb23e60f-kube-api-access-zzllp\") pod \"nova-cell1-conductor-db-sync-nsn99\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") " pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.287711 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nsn99" Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.671978 4736 generic.go:334] "Generic (PLEG): container finished" podID="e0d4ecd4-8be3-4c23-ae88-f93464271353" containerID="ede6ec94651d7b96ba34a78a1a504dd567416ba016f4bdf9816ce14ae6433068" exitCode=0 Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.672269 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" event={"ID":"e0d4ecd4-8be3-4c23-ae88-f93464271353","Type":"ContainerDied","Data":"ede6ec94651d7b96ba34a78a1a504dd567416ba016f4bdf9816ce14ae6433068"} Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.672292 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" event={"ID":"e0d4ecd4-8be3-4c23-ae88-f93464271353","Type":"ContainerStarted","Data":"8ed96a6a30354bc4ef31bd4e8da6b853c7a7d5267482f12383080b8800f852a8"} Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.682681 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9775a431-cbfd-4afb-9c1e-054dc5c20c7f","Type":"ContainerStarted","Data":"47bfa531ab6b0d1a8fecf5feb2eacd837057bb35b0365ceaec4f5a43f48921d2"} Feb 14 11:03:27 crc kubenswrapper[4736]: I0214 11:03:27.831043 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nsn99"] Feb 14 11:03:27 crc kubenswrapper[4736]: W0214 11:03:27.839227 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b151935_66d0_44a9_b6bb_4760eb23e60f.slice/crio-a0d0c94d19bca3d5140c83bde2ba9de9ab8139f4f093633e55231d88b1e04f1f WatchSource:0}: Error finding container a0d0c94d19bca3d5140c83bde2ba9de9ab8139f4f093633e55231d88b1e04f1f: Status 404 returned error can't find the container with id a0d0c94d19bca3d5140c83bde2ba9de9ab8139f4f093633e55231d88b1e04f1f Feb 14 11:03:28 crc 
kubenswrapper[4736]: I0214 11:03:28.696192 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerStarted","Data":"4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc"}
Feb 14 11:03:28 crc kubenswrapper[4736]: I0214 11:03:28.697390 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 14 11:03:28 crc kubenswrapper[4736]: I0214 11:03:28.712204 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" event={"ID":"e0d4ecd4-8be3-4c23-ae88-f93464271353","Type":"ContainerStarted","Data":"6d6f0fa7f10557fdf58e635daf79ff9ac56ecb1568a2e1ef7528b22d1c357285"}
Feb 14 11:03:28 crc kubenswrapper[4736]: I0214 11:03:28.712333 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7"
Feb 14 11:03:28 crc kubenswrapper[4736]: I0214 11:03:28.730241 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nsn99" event={"ID":"5b151935-66d0-44a9-b6bb-4760eb23e60f","Type":"ContainerStarted","Data":"7238a41ae74c3f788c852e4c07f0c8a0125cf92871fc5b195a73eec2a5e3b45d"}
Feb 14 11:03:28 crc kubenswrapper[4736]: I0214 11:03:28.730292 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nsn99" event={"ID":"5b151935-66d0-44a9-b6bb-4760eb23e60f","Type":"ContainerStarted","Data":"a0d0c94d19bca3d5140c83bde2ba9de9ab8139f4f093633e55231d88b1e04f1f"}
Feb 14 11:03:28 crc kubenswrapper[4736]: I0214 11:03:28.789434 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6414862770000003 podStartE2EDuration="6.789409197s" podCreationTimestamp="2026-02-14 11:03:22 +0000 UTC" firstStartedPulling="2026-02-14 11:03:23.422375876 +0000 UTC m=+1313.791003244" lastFinishedPulling="2026-02-14 11:03:27.570298796 +0000 UTC m=+1317.938926164" observedRunningTime="2026-02-14 11:03:28.766470461 +0000 UTC m=+1319.135097839" watchObservedRunningTime="2026-02-14 11:03:28.789409197 +0000 UTC m=+1319.158036575"
Feb 14 11:03:28 crc kubenswrapper[4736]: I0214 11:03:28.828807 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nsn99" podStartSLOduration=2.828788221 podStartE2EDuration="2.828788221s" podCreationTimestamp="2026-02-14 11:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:03:28.826044391 +0000 UTC m=+1319.194671759" watchObservedRunningTime="2026-02-14 11:03:28.828788221 +0000 UTC m=+1319.197415589"
Feb 14 11:03:28 crc kubenswrapper[4736]: I0214 11:03:28.858902 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" podStartSLOduration=3.858870984 podStartE2EDuration="3.858870984s" podCreationTimestamp="2026-02-14 11:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:03:28.854651652 +0000 UTC m=+1319.223279020" watchObservedRunningTime="2026-02-14 11:03:28.858870984 +0000 UTC m=+1319.227498352"
Feb 14 11:03:29 crc kubenswrapper[4736]: I0214 11:03:29.155328 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 11:03:29 crc kubenswrapper[4736]: I0214 11:03:29.175998 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 11:03:30 crc kubenswrapper[4736]: I0214 11:03:30.751292 4736 generic.go:334] "Generic (PLEG): container finished" podID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerID="d75c8d7443e295d15b6b896b7f6edfc518815583b7203ab9204c009d97e150d1" exitCode=137
Feb 14 11:03:30 crc kubenswrapper[4736]: I0214 11:03:30.751779 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d96c5d8-mfqqp" event={"ID":"bd003c66-fc46-445a-a88a-23a7c17f9747","Type":"ContainerDied","Data":"d75c8d7443e295d15b6b896b7f6edfc518815583b7203ab9204c009d97e150d1"}
Feb 14 11:03:30 crc kubenswrapper[4736]: I0214 11:03:30.752599 4736 scope.go:117] "RemoveContainer" containerID="04fd8fab3519745e093dbed42df83c22c60787a9527db958728640db4965d92b"
Feb 14 11:03:30 crc kubenswrapper[4736]: I0214 11:03:30.756058 4736 generic.go:334] "Generic (PLEG): container finished" podID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerID="addd3be5783720e5b80a35ec2a30cd08864d12153bd2d833826a22af62c8838b" exitCode=137
Feb 14 11:03:30 crc kubenswrapper[4736]: I0214 11:03:30.756954 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerDied","Data":"addd3be5783720e5b80a35ec2a30cd08864d12153bd2d833826a22af62c8838b"}
Feb 14 11:03:31 crc kubenswrapper[4736]: I0214 11:03:31.201504 4736 scope.go:117] "RemoveContainer" containerID="e9afa700f170b4aa20f9303e305f513dc88cc3df4f06793ac247cb0b4ca2f8ad"
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.785172 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerStarted","Data":"622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a"}
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.787886 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d7d2aab-a806-4967-a2b1-401ade4dbb6e","Type":"ContainerStarted","Data":"46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6"}
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.791969 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9775a431-cbfd-4afb-9c1e-054dc5c20c7f","Type":"ContainerStarted","Data":"9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae"}
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.792000 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9775a431-cbfd-4afb-9c1e-054dc5c20c7f","Type":"ContainerStarted","Data":"b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe"}
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.792087 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerName="nova-metadata-log" containerID="cri-o://b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe" gracePeriod=30
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.792176 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerName="nova-metadata-metadata" containerID="cri-o://9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae" gracePeriod=30
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.799608 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75255565-4d85-4d9a-917b-e4d9edd33154","Type":"ContainerStarted","Data":"37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498"}
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.809836 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="75255565-4d85-4d9a-917b-e4d9edd33154" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498" gracePeriod=30
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.822854 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9","Type":"ContainerStarted","Data":"fb0c45048275e0bf2651dccdb02220defccb3291b76f1afce7f1795ba3045ba7"}
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.822898 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9","Type":"ContainerStarted","Data":"0c1fec158fac884ca387ba1c6702598adfaecccf8e08716c21401b55ac79d44b"}
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.828348 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d96c5d8-mfqqp" event={"ID":"bd003c66-fc46-445a-a88a-23a7c17f9747","Type":"ContainerStarted","Data":"31a5a5543e8ff2bfc40181d389312596f43e83db42989e83ce2e1b7c7d94cd99"}
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.829671 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.098134355 podStartE2EDuration="7.829652301s" podCreationTimestamp="2026-02-14 11:03:25 +0000 UTC" firstStartedPulling="2026-02-14 11:03:26.469091739 +0000 UTC m=+1316.837719107" lastFinishedPulling="2026-02-14 11:03:31.200609685 +0000 UTC m=+1321.569237053" observedRunningTime="2026-02-14 11:03:32.82306355 +0000 UTC m=+1323.191690918" watchObservedRunningTime="2026-02-14 11:03:32.829652301 +0000 UTC m=+1323.198279669"
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.853568 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.164091559 podStartE2EDuration="7.853549665s" podCreationTimestamp="2026-02-14 11:03:25 +0000 UTC" firstStartedPulling="2026-02-14 11:03:26.604249393 +0000 UTC m=+1316.972876761" lastFinishedPulling="2026-02-14 11:03:31.293707499 +0000 UTC m=+1321.662334867" observedRunningTime="2026-02-14 11:03:32.839788086 +0000 UTC m=+1323.208415454" watchObservedRunningTime="2026-02-14 11:03:32.853549665 +0000 UTC m=+1323.222177033"
Feb 14 11:03:32 crc kubenswrapper[4736]: I0214 11:03:32.860435 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.854988034 podStartE2EDuration="8.860417965s" podCreationTimestamp="2026-02-14 11:03:24 +0000 UTC" firstStartedPulling="2026-02-14 11:03:26.27008976 +0000 UTC m=+1316.638717128" lastFinishedPulling="2026-02-14 11:03:31.275519691 +0000 UTC m=+1321.644147059" observedRunningTime="2026-02-14 11:03:32.852901386 +0000 UTC m=+1323.221528764" watchObservedRunningTime="2026-02-14 11:03:32.860417965 +0000 UTC m=+1323.229045333"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.485364 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.506477 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.477908412 podStartE2EDuration="9.506457765s" podCreationTimestamp="2026-02-14 11:03:24 +0000 UTC" firstStartedPulling="2026-02-14 11:03:26.243045414 +0000 UTC m=+1316.611672782" lastFinishedPulling="2026-02-14 11:03:31.271594777 +0000 UTC m=+1321.640222135" observedRunningTime="2026-02-14 11:03:32.899992754 +0000 UTC m=+1323.268620132" watchObservedRunningTime="2026-02-14 11:03:33.506457765 +0000 UTC m=+1323.875085123"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.674225 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-logs\") pod \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") "
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.674660 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdkrq\" (UniqueName: \"kubernetes.io/projected/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-kube-api-access-jdkrq\") pod \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") "
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.674588 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-logs" (OuterVolumeSpecName: "logs") pod "9775a431-cbfd-4afb-9c1e-054dc5c20c7f" (UID: "9775a431-cbfd-4afb-9c1e-054dc5c20c7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.675803 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-combined-ca-bundle\") pod \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") "
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.676313 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-config-data\") pod \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\" (UID: \"9775a431-cbfd-4afb-9c1e-054dc5c20c7f\") "
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.677383 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-logs\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.687103 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-kube-api-access-jdkrq" (OuterVolumeSpecName: "kube-api-access-jdkrq") pod "9775a431-cbfd-4afb-9c1e-054dc5c20c7f" (UID: "9775a431-cbfd-4afb-9c1e-054dc5c20c7f"). InnerVolumeSpecName "kube-api-access-jdkrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.794399 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdkrq\" (UniqueName: \"kubernetes.io/projected/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-kube-api-access-jdkrq\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.801336 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-config-data" (OuterVolumeSpecName: "config-data") pod "9775a431-cbfd-4afb-9c1e-054dc5c20c7f" (UID: "9775a431-cbfd-4afb-9c1e-054dc5c20c7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.812588 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9775a431-cbfd-4afb-9c1e-054dc5c20c7f" (UID: "9775a431-cbfd-4afb-9c1e-054dc5c20c7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.841114 4736 generic.go:334] "Generic (PLEG): container finished" podID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerID="9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae" exitCode=0
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.841150 4736 generic.go:334] "Generic (PLEG): container finished" podID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerID="b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe" exitCode=143
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.841194 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.841258 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9775a431-cbfd-4afb-9c1e-054dc5c20c7f","Type":"ContainerDied","Data":"9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae"}
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.841284 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9775a431-cbfd-4afb-9c1e-054dc5c20c7f","Type":"ContainerDied","Data":"b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe"}
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.841294 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9775a431-cbfd-4afb-9c1e-054dc5c20c7f","Type":"ContainerDied","Data":"47bfa531ab6b0d1a8fecf5feb2eacd837057bb35b0365ceaec4f5a43f48921d2"}
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.841308 4736 scope.go:117] "RemoveContainer" containerID="9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.903115 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.903157 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9775a431-cbfd-4afb-9c1e-054dc5c20c7f-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.957998 4736 scope.go:117] "RemoveContainer" containerID="b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.980124 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.986796 4736 scope.go:117] "RemoveContainer" containerID="9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae"
Feb 14 11:03:33 crc kubenswrapper[4736]: E0214 11:03:33.993879 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae\": container with ID starting with 9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae not found: ID does not exist" containerID="9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.993927 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae"} err="failed to get container status \"9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae\": rpc error: code = NotFound desc = could not find container \"9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae\": container with ID starting with 9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae not found: ID does not exist"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.993952 4736 scope.go:117] "RemoveContainer" containerID="b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.994037 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 11:03:33 crc kubenswrapper[4736]: E0214 11:03:33.995098 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe\": container with ID starting with b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe not found: ID does not exist" containerID="b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.995116 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe"} err="failed to get container status \"b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe\": rpc error: code = NotFound desc = could not find container \"b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe\": container with ID starting with b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe not found: ID does not exist"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.995130 4736 scope.go:117] "RemoveContainer" containerID="9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.999894 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae"} err="failed to get container status \"9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae\": rpc error: code = NotFound desc = could not find container \"9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae\": container with ID starting with 9612fc5d0f21119a549b7bf0ac34612e0859792ecfc1fda929fbb885da7193ae not found: ID does not exist"
Feb 14 11:03:33 crc kubenswrapper[4736]: I0214 11:03:33.999936 4736 scope.go:117] "RemoveContainer" containerID="b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.003814 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 11:03:34 crc kubenswrapper[4736]: E0214 11:03:34.004207 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerName="nova-metadata-metadata"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.004222 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerName="nova-metadata-metadata"
Feb 14 11:03:34 crc kubenswrapper[4736]: E0214 11:03:34.004267 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerName="nova-metadata-log"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.004273 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerName="nova-metadata-log"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.004436 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerName="nova-metadata-metadata"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.004455 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" containerName="nova-metadata-log"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.004977 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe"} err="failed to get container status \"b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe\": rpc error: code = NotFound desc = could not find container \"b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe\": container with ID starting with b6122ce5662e4a19c6629d5075fb1e70e59d35dc5e31640bfcb05e606ec343fe not found: ID does not exist"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.005361 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.009063 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.009270 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.042820 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.106241 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.106473 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe81ee1-e055-4c6b-b706-a650437d9b98-logs\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.106520 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42hjn\" (UniqueName: \"kubernetes.io/projected/cbe81ee1-e055-4c6b-b706-a650437d9b98-kube-api-access-42hjn\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.106552 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-config-data\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.106602 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.208044 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.208110 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.208333 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe81ee1-e055-4c6b-b706-a650437d9b98-logs\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.208669 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe81ee1-e055-4c6b-b706-a650437d9b98-logs\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.208785 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42hjn\" (UniqueName: \"kubernetes.io/projected/cbe81ee1-e055-4c6b-b706-a650437d9b98-kube-api-access-42hjn\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.209091 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-config-data\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.212254 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.216355 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.222293 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-config-data\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.409137 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9775a431-cbfd-4afb-9c1e-054dc5c20c7f" path="/var/lib/kubelet/pods/9775a431-cbfd-4afb-9c1e-054dc5c20c7f/volumes"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.489900 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42hjn\" (UniqueName: \"kubernetes.io/projected/cbe81ee1-e055-4c6b-b706-a650437d9b98-kube-api-access-42hjn\") pod \"nova-metadata-0\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:34 crc kubenswrapper[4736]: I0214 11:03:34.621847 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.110841 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.243332 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.243836 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.281618 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.460909 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.461128 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.472876 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.795870 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7"
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.885600 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbe81ee1-e055-4c6b-b706-a650437d9b98","Type":"ContainerStarted","Data":"092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772"}
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.885651 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbe81ee1-e055-4c6b-b706-a650437d9b98","Type":"ContainerStarted","Data":"67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250"}
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.885668 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbe81ee1-e055-4c6b-b706-a650437d9b98","Type":"ContainerStarted","Data":"2e0c8a588a925bf6a7415fd86b17653a5161e21caa1d24939fec11ca4410cb6b"}
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.894161 4736 generic.go:334] "Generic (PLEG): container finished" podID="bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5" containerID="baff73b7359a735febd8471b1d7cdd096cdc1d642d1e6de5aac154ac28b30abf" exitCode=0
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.895123 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lgxl6" event={"ID":"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5","Type":"ContainerDied","Data":"baff73b7359a735febd8471b1d7cdd096cdc1d642d1e6de5aac154ac28b30abf"}
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.912910 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wqmzf"]
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.913199 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" podUID="ca482914-1fef-4b08-a3c6-5b1418426443" containerName="dnsmasq-dns" containerID="cri-o://5b14e621278328e3b0f7d188c69fe4737f4603c8ee4a047cc270322b9f431c20" gracePeriod=10
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.925300 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9252764239999998 podStartE2EDuration="2.925276424s" podCreationTimestamp="2026-02-14 11:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:03:35.907469037 +0000 UTC m=+1326.276096425" watchObservedRunningTime="2026-02-14 11:03:35.925276424 +0000 UTC m=+1326.293903792"
Feb 14 11:03:35 crc kubenswrapper[4736]: I0214 11:03:35.954794 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 14 11:03:36 crc kubenswrapper[4736]: I0214 11:03:36.542950 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 11:03:36 crc kubenswrapper[4736]: I0214 11:03:36.543179 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 11:03:36 crc kubenswrapper[4736]: I0214 11:03:36.907631 4736 generic.go:334] "Generic (PLEG): container finished" podID="ca482914-1fef-4b08-a3c6-5b1418426443" containerID="5b14e621278328e3b0f7d188c69fe4737f4603c8ee4a047cc270322b9f431c20" exitCode=0
Feb 14 11:03:36 crc kubenswrapper[4736]: I0214 11:03:36.907786 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" event={"ID":"ca482914-1fef-4b08-a3c6-5b1418426443","Type":"ContainerDied","Data":"5b14e621278328e3b0f7d188c69fe4737f4603c8ee4a047cc270322b9f431c20"}
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.157514 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf"
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.311347 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-svc\") pod \"ca482914-1fef-4b08-a3c6-5b1418426443\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") "
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.311593 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-swift-storage-0\") pod \"ca482914-1fef-4b08-a3c6-5b1418426443\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") "
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.311697 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz7xq\" (UniqueName: \"kubernetes.io/projected/ca482914-1fef-4b08-a3c6-5b1418426443-kube-api-access-qz7xq\") pod \"ca482914-1fef-4b08-a3c6-5b1418426443\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") "
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.311724 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-sb\") pod \"ca482914-1fef-4b08-a3c6-5b1418426443\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") "
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.311777 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-config\") pod \"ca482914-1fef-4b08-a3c6-5b1418426443\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") "
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.311813 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-nb\") pod \"ca482914-1fef-4b08-a3c6-5b1418426443\" (UID: \"ca482914-1fef-4b08-a3c6-5b1418426443\") "
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.342470 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca482914-1fef-4b08-a3c6-5b1418426443-kube-api-access-qz7xq" (OuterVolumeSpecName: "kube-api-access-qz7xq") pod "ca482914-1fef-4b08-a3c6-5b1418426443" (UID: "ca482914-1fef-4b08-a3c6-5b1418426443"). InnerVolumeSpecName "kube-api-access-qz7xq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.408420 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca482914-1fef-4b08-a3c6-5b1418426443" (UID: "ca482914-1fef-4b08-a3c6-5b1418426443"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.417725 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz7xq\" (UniqueName: \"kubernetes.io/projected/ca482914-1fef-4b08-a3c6-5b1418426443-kube-api-access-qz7xq\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.417765 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.420717 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca482914-1fef-4b08-a3c6-5b1418426443" (UID: "ca482914-1fef-4b08-a3c6-5b1418426443"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.421524 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ca482914-1fef-4b08-a3c6-5b1418426443" (UID: "ca482914-1fef-4b08-a3c6-5b1418426443"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.428382 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca482914-1fef-4b08-a3c6-5b1418426443" (UID: "ca482914-1fef-4b08-a3c6-5b1418426443"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.451257 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-config" (OuterVolumeSpecName: "config") pod "ca482914-1fef-4b08-a3c6-5b1418426443" (UID: "ca482914-1fef-4b08-a3c6-5b1418426443"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.510656 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lgxl6" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.519274 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.519303 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.519312 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.519320 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca482914-1fef-4b08-a3c6-5b1418426443-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.619927 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m578j\" (UniqueName: \"kubernetes.io/projected/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-kube-api-access-m578j\") pod 
\"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.619988 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-combined-ca-bundle\") pod \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.620314 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-scripts\") pod \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.620357 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-config-data\") pod \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\" (UID: \"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5\") " Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.657678 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5" (UID: "bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.657885 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-config-data" (OuterVolumeSpecName: "config-data") pod "bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5" (UID: "bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.658243 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-scripts" (OuterVolumeSpecName: "scripts") pod "bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5" (UID: "bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.659034 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-kube-api-access-m578j" (OuterVolumeSpecName: "kube-api-access-m578j") pod "bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5" (UID: "bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5"). InnerVolumeSpecName "kube-api-access-m578j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.722869 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.722916 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.722931 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m578j\" (UniqueName: \"kubernetes.io/projected/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-kube-api-access-m578j\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.722945 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 
14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.918005 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lgxl6" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.918000 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lgxl6" event={"ID":"bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5","Type":"ContainerDied","Data":"cc30d22054bc8af613593a79ac82d5c19ef0b5adb6999e0802281f04fd985782"} Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.918125 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc30d22054bc8af613593a79ac82d5c19ef0b5adb6999e0802281f04fd985782" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.921116 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" event={"ID":"ca482914-1fef-4b08-a3c6-5b1418426443","Type":"ContainerDied","Data":"eaa4573d09bd4d038e2381dfb221aa2b910eb85499ae46f02665184dd6a39c2d"} Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.921177 4736 scope.go:117] "RemoveContainer" containerID="5b14e621278328e3b0f7d188c69fe4737f4603c8ee4a047cc270322b9f431c20" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.921190 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-wqmzf" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.948088 4736 scope.go:117] "RemoveContainer" containerID="bdf158c3edd14e4da655d8c8f8f560d911e75c574f29a31e9a15e3a673357c73" Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.967835 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wqmzf"] Feb 14 11:03:37 crc kubenswrapper[4736]: I0214 11:03:37.976722 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-wqmzf"] Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.164630 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.164867 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerName="nova-api-log" containerID="cri-o://0c1fec158fac884ca387ba1c6702598adfaecccf8e08716c21401b55ac79d44b" gracePeriod=30 Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.164934 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerName="nova-api-api" containerID="cri-o://fb0c45048275e0bf2651dccdb02220defccb3291b76f1afce7f1795ba3045ba7" gracePeriod=30 Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.181296 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.181504 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3d7d2aab-a806-4967-a2b1-401ade4dbb6e" containerName="nova-scheduler-scheduler" containerID="cri-o://46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6" gracePeriod=30 Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.192979 4736 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.193200 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerName="nova-metadata-log" containerID="cri-o://67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250" gracePeriod=30 Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.193388 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerName="nova-metadata-metadata" containerID="cri-o://092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772" gracePeriod=30 Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.415530 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca482914-1fef-4b08-a3c6-5b1418426443" path="/var/lib/kubelet/pods/ca482914-1fef-4b08-a3c6-5b1418426443/volumes" Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.748971 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.929771 4736 generic.go:334] "Generic (PLEG): container finished" podID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerID="0c1fec158fac884ca387ba1c6702598adfaecccf8e08716c21401b55ac79d44b" exitCode=143 Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.929845 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9","Type":"ContainerDied","Data":"0c1fec158fac884ca387ba1c6702598adfaecccf8e08716c21401b55ac79d44b"} Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.935330 4736 generic.go:334] "Generic (PLEG): container finished" podID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerID="092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772" exitCode=0 Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.935357 4736 generic.go:334] "Generic (PLEG): container finished" podID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerID="67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250" exitCode=143 Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.935375 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbe81ee1-e055-4c6b-b706-a650437d9b98","Type":"ContainerDied","Data":"092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772"} Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.935398 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbe81ee1-e055-4c6b-b706-a650437d9b98","Type":"ContainerDied","Data":"67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250"} Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.935407 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"cbe81ee1-e055-4c6b-b706-a650437d9b98","Type":"ContainerDied","Data":"2e0c8a588a925bf6a7415fd86b17653a5161e21caa1d24939fec11ca4410cb6b"} Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.935421 4736 scope.go:117] "RemoveContainer" containerID="092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772" Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.935517 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.943201 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42hjn\" (UniqueName: \"kubernetes.io/projected/cbe81ee1-e055-4c6b-b706-a650437d9b98-kube-api-access-42hjn\") pod \"cbe81ee1-e055-4c6b-b706-a650437d9b98\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.943372 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe81ee1-e055-4c6b-b706-a650437d9b98-logs\") pod \"cbe81ee1-e055-4c6b-b706-a650437d9b98\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.943430 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-config-data\") pod \"cbe81ee1-e055-4c6b-b706-a650437d9b98\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.943482 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-nova-metadata-tls-certs\") pod \"cbe81ee1-e055-4c6b-b706-a650437d9b98\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.943518 4736 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-combined-ca-bundle\") pod \"cbe81ee1-e055-4c6b-b706-a650437d9b98\" (UID: \"cbe81ee1-e055-4c6b-b706-a650437d9b98\") " Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.944143 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe81ee1-e055-4c6b-b706-a650437d9b98-logs" (OuterVolumeSpecName: "logs") pod "cbe81ee1-e055-4c6b-b706-a650437d9b98" (UID: "cbe81ee1-e055-4c6b-b706-a650437d9b98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:03:38 crc kubenswrapper[4736]: I0214 11:03:38.969334 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe81ee1-e055-4c6b-b706-a650437d9b98-kube-api-access-42hjn" (OuterVolumeSpecName: "kube-api-access-42hjn") pod "cbe81ee1-e055-4c6b-b706-a650437d9b98" (UID: "cbe81ee1-e055-4c6b-b706-a650437d9b98"). InnerVolumeSpecName "kube-api-access-42hjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.001079 4736 scope.go:117] "RemoveContainer" containerID="67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.005243 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-config-data" (OuterVolumeSpecName: "config-data") pod "cbe81ee1-e055-4c6b-b706-a650437d9b98" (UID: "cbe81ee1-e055-4c6b-b706-a650437d9b98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.017713 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbe81ee1-e055-4c6b-b706-a650437d9b98" (UID: "cbe81ee1-e055-4c6b-b706-a650437d9b98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.045894 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe81ee1-e055-4c6b-b706-a650437d9b98-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.045925 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.045965 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.045975 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42hjn\" (UniqueName: \"kubernetes.io/projected/cbe81ee1-e055-4c6b-b706-a650437d9b98-kube-api-access-42hjn\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.056073 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cbe81ee1-e055-4c6b-b706-a650437d9b98" (UID: "cbe81ee1-e055-4c6b-b706-a650437d9b98"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.147714 4736 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe81ee1-e055-4c6b-b706-a650437d9b98-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.177700 4736 scope.go:117] "RemoveContainer" containerID="092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772" Feb 14 11:03:39 crc kubenswrapper[4736]: E0214 11:03:39.178126 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772\": container with ID starting with 092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772 not found: ID does not exist" containerID="092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.178166 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772"} err="failed to get container status \"092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772\": rpc error: code = NotFound desc = could not find container \"092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772\": container with ID starting with 092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772 not found: ID does not exist" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.178192 4736 scope.go:117] "RemoveContainer" containerID="67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250" Feb 14 11:03:39 crc kubenswrapper[4736]: E0214 11:03:39.178516 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250\": container with ID starting with 67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250 not found: ID does not exist" containerID="67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.178536 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250"} err="failed to get container status \"67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250\": rpc error: code = NotFound desc = could not find container \"67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250\": container with ID starting with 67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250 not found: ID does not exist" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.178549 4736 scope.go:117] "RemoveContainer" containerID="092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.178826 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772"} err="failed to get container status \"092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772\": rpc error: code = NotFound desc = could not find container \"092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772\": container with ID starting with 092b4960ee92d66bd3b06687755d70320f525ea5e14efcabe29bf4ecaa461772 not found: ID does not exist" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.178855 4736 scope.go:117] "RemoveContainer" containerID="67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.179147 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250"} err="failed to get container status \"67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250\": rpc error: code = NotFound desc = could not find container \"67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250\": container with ID starting with 67c59385fc5fe419deddc05ad6671972da42e1e3c12a90bcfb8c1a798671b250 not found: ID does not exist" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.272502 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.282761 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.294813 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:03:39 crc kubenswrapper[4736]: E0214 11:03:39.295261 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca482914-1fef-4b08-a3c6-5b1418426443" containerName="dnsmasq-dns" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.295282 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca482914-1fef-4b08-a3c6-5b1418426443" containerName="dnsmasq-dns" Feb 14 11:03:39 crc kubenswrapper[4736]: E0214 11:03:39.295297 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerName="nova-metadata-metadata" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.295305 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerName="nova-metadata-metadata" Feb 14 11:03:39 crc kubenswrapper[4736]: E0214 11:03:39.295328 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5" containerName="nova-manage" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.295335 4736 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5" containerName="nova-manage" Feb 14 11:03:39 crc kubenswrapper[4736]: E0214 11:03:39.295366 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca482914-1fef-4b08-a3c6-5b1418426443" containerName="init" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.295374 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca482914-1fef-4b08-a3c6-5b1418426443" containerName="init" Feb 14 11:03:39 crc kubenswrapper[4736]: E0214 11:03:39.295385 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerName="nova-metadata-log" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.295393 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerName="nova-metadata-log" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.295595 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerName="nova-metadata-metadata" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.295687 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe81ee1-e055-4c6b-b706-a650437d9b98" containerName="nova-metadata-log" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.295704 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca482914-1fef-4b08-a3c6-5b1418426443" containerName="dnsmasq-dns" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.295716 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5" containerName="nova-manage" Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.296616 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.302144 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.303200 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.308833 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.453185 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.453377 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0820df7b-fee4-438e-96bb-0dc1b3da39dc-logs\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.453532 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.453673 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-config-data\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.453713 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fp7z\" (UniqueName: \"kubernetes.io/projected/0820df7b-fee4-438e-96bb-0dc1b3da39dc-kube-api-access-4fp7z\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.555808 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.556535 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0820df7b-fee4-438e-96bb-0dc1b3da39dc-logs\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.556651 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.557625 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-config-data\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.557686 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fp7z\" (UniqueName: \"kubernetes.io/projected/0820df7b-fee4-438e-96bb-0dc1b3da39dc-kube-api-access-4fp7z\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.561999 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0820df7b-fee4-438e-96bb-0dc1b3da39dc-logs\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.562991 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.566426 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.574462 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-config-data\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.584123 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fp7z\" (UniqueName: \"kubernetes.io/projected/0820df7b-fee4-438e-96bb-0dc1b3da39dc-kube-api-access-4fp7z\") pod \"nova-metadata-0\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.612644 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.951555 4736 generic.go:334] "Generic (PLEG): container finished" podID="5b151935-66d0-44a9-b6bb-4760eb23e60f" containerID="7238a41ae74c3f788c852e4c07f0c8a0125cf92871fc5b195a73eec2a5e3b45d" exitCode=0
Feb 14 11:03:39 crc kubenswrapper[4736]: I0214 11:03:39.951968 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nsn99" event={"ID":"5b151935-66d0-44a9-b6bb-4760eb23e60f","Type":"ContainerDied","Data":"7238a41ae74c3f788c852e4c07f0c8a0125cf92871fc5b195a73eec2a5e3b45d"}
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.101227 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 11:03:40 crc kubenswrapper[4736]: E0214 11:03:40.245366 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6 is running failed: container process not found" containerID="46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 14 11:03:40 crc kubenswrapper[4736]: E0214 11:03:40.246064 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6 is running failed: container process not found" containerID="46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 14 11:03:40 crc kubenswrapper[4736]: E0214 11:03:40.246311 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6 is running failed: container process not found" containerID="46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 14 11:03:40 crc kubenswrapper[4736]: E0214 11:03:40.246343 4736 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3d7d2aab-a806-4967-a2b1-401ade4dbb6e" containerName="nova-scheduler-scheduler"
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.272529 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54b8d5f54d-bvjc4"
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.272610 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54b8d5f54d-bvjc4"
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.428280 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe81ee1-e055-4c6b-b706-a650437d9b98" path="/var/lib/kubelet/pods/cbe81ee1-e055-4c6b-b706-a650437d9b98/volumes"
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.435558 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d96c5d8-mfqqp"
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.435941 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d96c5d8-mfqqp"
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.594043 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.778162 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk4v7\" (UniqueName: \"kubernetes.io/projected/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-kube-api-access-fk4v7\") pod \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") "
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.778577 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-combined-ca-bundle\") pod \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") "
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.779010 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-config-data\") pod \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\" (UID: \"3d7d2aab-a806-4967-a2b1-401ade4dbb6e\") "
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.782225 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-kube-api-access-fk4v7" (OuterVolumeSpecName: "kube-api-access-fk4v7") pod "3d7d2aab-a806-4967-a2b1-401ade4dbb6e" (UID: "3d7d2aab-a806-4967-a2b1-401ade4dbb6e"). InnerVolumeSpecName "kube-api-access-fk4v7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.806294 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d7d2aab-a806-4967-a2b1-401ade4dbb6e" (UID: "3d7d2aab-a806-4967-a2b1-401ade4dbb6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.815771 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-config-data" (OuterVolumeSpecName: "config-data") pod "3d7d2aab-a806-4967-a2b1-401ade4dbb6e" (UID: "3d7d2aab-a806-4967-a2b1-401ade4dbb6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.881374 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.881617 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.881683 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk4v7\" (UniqueName: \"kubernetes.io/projected/3d7d2aab-a806-4967-a2b1-401ade4dbb6e-kube-api-access-fk4v7\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.966336 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0820df7b-fee4-438e-96bb-0dc1b3da39dc","Type":"ContainerStarted","Data":"4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1"}
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.966393 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0820df7b-fee4-438e-96bb-0dc1b3da39dc","Type":"ContainerStarted","Data":"cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e"}
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.966407 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0820df7b-fee4-438e-96bb-0dc1b3da39dc","Type":"ContainerStarted","Data":"e7781c7d462b2db51b636765963d41e1bac5b7392d98fef6dbacb9bb710f2cc0"}
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.968375 4736 generic.go:334] "Generic (PLEG): container finished" podID="3d7d2aab-a806-4967-a2b1-401ade4dbb6e" containerID="46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6" exitCode=0
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.968443 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d7d2aab-a806-4967-a2b1-401ade4dbb6e","Type":"ContainerDied","Data":"46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6"}
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.968470 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d7d2aab-a806-4967-a2b1-401ade4dbb6e","Type":"ContainerDied","Data":"afab2991814634aa0bf3ff67ee4ffc1bd640e9fc19da9ca5acd78a7781607d3c"}
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.968488 4736 scope.go:117] "RemoveContainer" containerID="46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6"
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.969407 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 11:03:40 crc kubenswrapper[4736]: I0214 11:03:40.995960 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.99593646 podStartE2EDuration="1.99593646s" podCreationTimestamp="2026-02-14 11:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:03:40.989670078 +0000 UTC m=+1331.358297446" watchObservedRunningTime="2026-02-14 11:03:40.99593646 +0000 UTC m=+1331.364563828"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.003317 4736 scope.go:117] "RemoveContainer" containerID="46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6"
Feb 14 11:03:41 crc kubenswrapper[4736]: E0214 11:03:41.003916 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6\": container with ID starting with 46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6 not found: ID does not exist" containerID="46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.004031 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6"} err="failed to get container status \"46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6\": rpc error: code = NotFound desc = could not find container \"46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6\": container with ID starting with 46aee2674fd8026fccb12c6d95d3cc3b088f2c2f8dfc3b739057768e4ccb3ee6 not found: ID does not exist"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.068382 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.078490 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.098538 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 11:03:41 crc kubenswrapper[4736]: E0214 11:03:41.098920 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7d2aab-a806-4967-a2b1-401ade4dbb6e" containerName="nova-scheduler-scheduler"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.098932 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7d2aab-a806-4967-a2b1-401ade4dbb6e" containerName="nova-scheduler-scheduler"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.099174 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d7d2aab-a806-4967-a2b1-401ade4dbb6e" containerName="nova-scheduler-scheduler"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.099708 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.106498 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.117144 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.188586 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wmxm\" (UniqueName: \"kubernetes.io/projected/a532c856-f3f7-4d38-a310-83a3df9bcae6-kube-api-access-6wmxm\") pod \"nova-scheduler-0\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.188680 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-config-data\") pod \"nova-scheduler-0\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.188900 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.291167 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wmxm\" (UniqueName: \"kubernetes.io/projected/a532c856-f3f7-4d38-a310-83a3df9bcae6-kube-api-access-6wmxm\") pod \"nova-scheduler-0\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.291302 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-config-data\") pod \"nova-scheduler-0\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.291334 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.299099 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.300061 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-config-data\") pod \"nova-scheduler-0\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.313965 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wmxm\" (UniqueName: \"kubernetes.io/projected/a532c856-f3f7-4d38-a310-83a3df9bcae6-kube-api-access-6wmxm\") pod \"nova-scheduler-0\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.397496 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nsn99"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.470295 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.494909 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-config-data\") pod \"5b151935-66d0-44a9-b6bb-4760eb23e60f\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") "
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.495103 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzllp\" (UniqueName: \"kubernetes.io/projected/5b151935-66d0-44a9-b6bb-4760eb23e60f-kube-api-access-zzllp\") pod \"5b151935-66d0-44a9-b6bb-4760eb23e60f\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") "
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.495231 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-combined-ca-bundle\") pod \"5b151935-66d0-44a9-b6bb-4760eb23e60f\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") "
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.495263 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-scripts\") pod \"5b151935-66d0-44a9-b6bb-4760eb23e60f\" (UID: \"5b151935-66d0-44a9-b6bb-4760eb23e60f\") "
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.499249 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-scripts" (OuterVolumeSpecName: "scripts") pod "5b151935-66d0-44a9-b6bb-4760eb23e60f" (UID: "5b151935-66d0-44a9-b6bb-4760eb23e60f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.500194 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b151935-66d0-44a9-b6bb-4760eb23e60f-kube-api-access-zzllp" (OuterVolumeSpecName: "kube-api-access-zzllp") pod "5b151935-66d0-44a9-b6bb-4760eb23e60f" (UID: "5b151935-66d0-44a9-b6bb-4760eb23e60f"). InnerVolumeSpecName "kube-api-access-zzllp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.551879 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-config-data" (OuterVolumeSpecName: "config-data") pod "5b151935-66d0-44a9-b6bb-4760eb23e60f" (UID: "5b151935-66d0-44a9-b6bb-4760eb23e60f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.554959 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b151935-66d0-44a9-b6bb-4760eb23e60f" (UID: "5b151935-66d0-44a9-b6bb-4760eb23e60f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.610778 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzllp\" (UniqueName: \"kubernetes.io/projected/5b151935-66d0-44a9-b6bb-4760eb23e60f-kube-api-access-zzllp\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.611170 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.611180 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.611189 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b151935-66d0-44a9-b6bb-4760eb23e60f-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.900023 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.980568 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a532c856-f3f7-4d38-a310-83a3df9bcae6","Type":"ContainerStarted","Data":"ab19fa8315c17f8328f27bdd9f096891069dd536c72c72a561a0cd10a2ea871b"}
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.983919 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nsn99" event={"ID":"5b151935-66d0-44a9-b6bb-4760eb23e60f","Type":"ContainerDied","Data":"a0d0c94d19bca3d5140c83bde2ba9de9ab8139f4f093633e55231d88b1e04f1f"}
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.983967 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0d0c94d19bca3d5140c83bde2ba9de9ab8139f4f093633e55231d88b1e04f1f"
Feb 14 11:03:41 crc kubenswrapper[4736]: I0214 11:03:41.983931 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nsn99"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.073400 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 14 11:03:42 crc kubenswrapper[4736]: E0214 11:03:42.074102 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b151935-66d0-44a9-b6bb-4760eb23e60f" containerName="nova-cell1-conductor-db-sync"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.074119 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b151935-66d0-44a9-b6bb-4760eb23e60f" containerName="nova-cell1-conductor-db-sync"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.074324 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b151935-66d0-44a9-b6bb-4760eb23e60f" containerName="nova-cell1-conductor-db-sync"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.074896 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.081053 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.098611 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.123419 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a7ad59-032e-457c-84ee-a3145f286106-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"22a7ad59-032e-457c-84ee-a3145f286106\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.123533 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a7ad59-032e-457c-84ee-a3145f286106-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"22a7ad59-032e-457c-84ee-a3145f286106\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.123647 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsfw\" (UniqueName: \"kubernetes.io/projected/22a7ad59-032e-457c-84ee-a3145f286106-kube-api-access-6gsfw\") pod \"nova-cell1-conductor-0\" (UID: \"22a7ad59-032e-457c-84ee-a3145f286106\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.224854 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gsfw\" (UniqueName: \"kubernetes.io/projected/22a7ad59-032e-457c-84ee-a3145f286106-kube-api-access-6gsfw\") pod \"nova-cell1-conductor-0\" (UID: \"22a7ad59-032e-457c-84ee-a3145f286106\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.225057 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a7ad59-032e-457c-84ee-a3145f286106-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"22a7ad59-032e-457c-84ee-a3145f286106\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.225132 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a7ad59-032e-457c-84ee-a3145f286106-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"22a7ad59-032e-457c-84ee-a3145f286106\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.231422 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a7ad59-032e-457c-84ee-a3145f286106-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"22a7ad59-032e-457c-84ee-a3145f286106\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.238450 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a7ad59-032e-457c-84ee-a3145f286106-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"22a7ad59-032e-457c-84ee-a3145f286106\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.243220 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gsfw\" (UniqueName: \"kubernetes.io/projected/22a7ad59-032e-457c-84ee-a3145f286106-kube-api-access-6gsfw\") pod \"nova-cell1-conductor-0\" (UID: \"22a7ad59-032e-457c-84ee-a3145f286106\") " pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.408974 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d7d2aab-a806-4967-a2b1-401ade4dbb6e" path="/var/lib/kubelet/pods/3d7d2aab-a806-4967-a2b1-401ade4dbb6e/volumes"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.422900 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.963881 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.998271 4736 generic.go:334] "Generic (PLEG): container finished" podID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerID="fb0c45048275e0bf2651dccdb02220defccb3291b76f1afce7f1795ba3045ba7" exitCode=0
Feb 14 11:03:42 crc kubenswrapper[4736]: I0214 11:03:42.998360 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9","Type":"ContainerDied","Data":"fb0c45048275e0bf2651dccdb02220defccb3291b76f1afce7f1795ba3045ba7"}
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.002484 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22a7ad59-032e-457c-84ee-a3145f286106","Type":"ContainerStarted","Data":"cae58fd468c406647eef9f1df3c14c0736ce3b6d3c2aa135f9f1bbdd9590a86c"}
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.004446 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a532c856-f3f7-4d38-a310-83a3df9bcae6","Type":"ContainerStarted","Data":"98c1879ddfcbe3b47f5d4dafb1399178cedcd119ea55e855971af8a90d9c271b"}
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.026304 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.026284139 podStartE2EDuration="2.026284139s" podCreationTimestamp="2026-02-14 11:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:03:43.026065252 +0000 UTC m=+1333.394692620" watchObservedRunningTime="2026-02-14 11:03:43.026284139 +0000 UTC m=+1333.394911507"
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.118277 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.239675 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-config-data\") pod \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") "
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.239792 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-combined-ca-bundle\") pod \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") "
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.239853 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-logs\") pod \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") "
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.239920 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vtr4\" (UniqueName: \"kubernetes.io/projected/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-kube-api-access-6vtr4\") pod \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\" (UID: \"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9\") "
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.240324 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-logs" (OuterVolumeSpecName: "logs") pod "b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" (UID: "b8d7453c-9c58-4a9e-b5cb-a5febabe82a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.240814 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-logs\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.244673 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-kube-api-access-6vtr4" (OuterVolumeSpecName: "kube-api-access-6vtr4") pod "b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" (UID: "b8d7453c-9c58-4a9e-b5cb-a5febabe82a9"). InnerVolumeSpecName "kube-api-access-6vtr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.267320 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" (UID: "b8d7453c-9c58-4a9e-b5cb-a5febabe82a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.277163 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-config-data" (OuterVolumeSpecName: "config-data") pod "b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" (UID: "b8d7453c-9c58-4a9e-b5cb-a5febabe82a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.342492 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.342528 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vtr4\" (UniqueName: \"kubernetes.io/projected/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-kube-api-access-6vtr4\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:43 crc kubenswrapper[4736]: I0214 11:03:43.342543 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.014313 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22a7ad59-032e-457c-84ee-a3145f286106","Type":"ContainerStarted","Data":"86112d79a83f98acf14a10755c3dad5af5c230e958057704756145c9adb8d237"}
Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.014656 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.016007 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8d7453c-9c58-4a9e-b5cb-a5febabe82a9","Type":"ContainerDied","Data":"829081f4a733f41ae95ac6d42bb57ae977c7a9f0e714de92a1b18445e25831ed"}
Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.016029 4736 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.016049 4736 scope.go:117] "RemoveContainer" containerID="fb0c45048275e0bf2651dccdb02220defccb3291b76f1afce7f1795ba3045ba7" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.078499 4736 scope.go:117] "RemoveContainer" containerID="0c1fec158fac884ca387ba1c6702598adfaecccf8e08716c21401b55ac79d44b" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.114515 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.114491749 podStartE2EDuration="2.114491749s" podCreationTimestamp="2026-02-14 11:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:03:44.032875199 +0000 UTC m=+1334.401502567" watchObservedRunningTime="2026-02-14 11:03:44.114491749 +0000 UTC m=+1334.483119117" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.127209 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.165826 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.173138 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 11:03:44 crc kubenswrapper[4736]: E0214 11:03:44.173533 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerName="nova-api-api" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.173552 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerName="nova-api-api" Feb 14 11:03:44 crc kubenswrapper[4736]: E0214 11:03:44.173577 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" 
containerName="nova-api-log" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.173583 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerName="nova-api-log" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.173761 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerName="nova-api-log" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.173781 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" containerName="nova-api-api" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.175010 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.177507 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.181194 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.283196 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frz47\" (UniqueName: \"kubernetes.io/projected/54a3af93-90c2-4d02-a7df-e60a0d7db585-kube-api-access-frz47\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.283319 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.283341 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-config-data\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.283493 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54a3af93-90c2-4d02-a7df-e60a0d7db585-logs\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.385897 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54a3af93-90c2-4d02-a7df-e60a0d7db585-logs\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.386545 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54a3af93-90c2-4d02-a7df-e60a0d7db585-logs\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.386835 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frz47\" (UniqueName: \"kubernetes.io/projected/54a3af93-90c2-4d02-a7df-e60a0d7db585-kube-api-access-frz47\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.387412 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " 
pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.388350 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-config-data\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.394838 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-config-data\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.395011 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.409539 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frz47\" (UniqueName: \"kubernetes.io/projected/54a3af93-90c2-4d02-a7df-e60a0d7db585-kube-api-access-frz47\") pod \"nova-api-0\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.423356 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d7453c-9c58-4a9e-b5cb-a5febabe82a9" path="/var/lib/kubelet/pods/b8d7453c-9c58-4a9e-b5cb-a5febabe82a9/volumes" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.493371 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.614192 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.615119 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 11:03:44 crc kubenswrapper[4736]: I0214 11:03:44.977072 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:03:45 crc kubenswrapper[4736]: I0214 11:03:45.028963 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54a3af93-90c2-4d02-a7df-e60a0d7db585","Type":"ContainerStarted","Data":"a550a5b7ad9b5d576341771c78c69c77bbefaefb99a41de2173017d061de9c45"} Feb 14 11:03:46 crc kubenswrapper[4736]: I0214 11:03:46.042054 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54a3af93-90c2-4d02-a7df-e60a0d7db585","Type":"ContainerStarted","Data":"8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4"} Feb 14 11:03:46 crc kubenswrapper[4736]: I0214 11:03:46.043896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54a3af93-90c2-4d02-a7df-e60a0d7db585","Type":"ContainerStarted","Data":"62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23"} Feb 14 11:03:46 crc kubenswrapper[4736]: I0214 11:03:46.080544 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.08052161 podStartE2EDuration="2.08052161s" podCreationTimestamp="2026-02-14 11:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:03:46.059038476 +0000 UTC m=+1336.427665854" watchObservedRunningTime="2026-02-14 11:03:46.08052161 +0000 UTC m=+1336.449148988" Feb 14 11:03:46 crc 
kubenswrapper[4736]: I0214 11:03:46.470826 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 11:03:49 crc kubenswrapper[4736]: I0214 11:03:49.613898 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 11:03:49 crc kubenswrapper[4736]: I0214 11:03:49.614317 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 11:03:50 crc kubenswrapper[4736]: I0214 11:03:50.274266 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:03:50 crc kubenswrapper[4736]: I0214 11:03:50.437571 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d96c5d8-mfqqp" podUID="bd003c66-fc46-445a-a88a-23a7c17f9747" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 14 11:03:50 crc kubenswrapper[4736]: I0214 11:03:50.626064 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 11:03:50 crc kubenswrapper[4736]: I0214 11:03:50.626093 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 14 11:03:51 crc kubenswrapper[4736]: I0214 11:03:51.471248 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 11:03:51 crc kubenswrapper[4736]: I0214 11:03:51.506434 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 11:03:52 crc kubenswrapper[4736]: I0214 11:03:52.163109 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 11:03:52 crc kubenswrapper[4736]: I0214 11:03:52.456552 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 14 11:03:52 crc kubenswrapper[4736]: I0214 11:03:52.929010 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 11:03:54 crc kubenswrapper[4736]: I0214 11:03:54.494467 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 11:03:54 crc kubenswrapper[4736]: I0214 11:03:54.494872 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 11:03:55 crc kubenswrapper[4736]: I0214 11:03:55.575932 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 11:03:55 crc kubenswrapper[4736]: I0214 11:03:55.575953 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 11:03:56 crc kubenswrapper[4736]: I0214 11:03:56.804054 4736 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 11:03:56 crc kubenswrapper[4736]: I0214 11:03:56.804637 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6871c749-b4c2-4a16-8322-aa4384a1b86b" containerName="kube-state-metrics" containerID="cri-o://6c62a90297343352e1252a91e5d5e40032f0917d1f8225eb84c6dd635b60a3d4" gracePeriod=30 Feb 14 11:03:56 crc kubenswrapper[4736]: I0214 11:03:56.982721 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="6871c749-b4c2-4a16-8322-aa4384a1b86b" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": dial tcp 10.217.0.103:8081: connect: connection refused" Feb 14 11:03:57 crc kubenswrapper[4736]: I0214 11:03:57.186854 4736 generic.go:334] "Generic (PLEG): container finished" podID="6871c749-b4c2-4a16-8322-aa4384a1b86b" containerID="6c62a90297343352e1252a91e5d5e40032f0917d1f8225eb84c6dd635b60a3d4" exitCode=2 Feb 14 11:03:57 crc kubenswrapper[4736]: I0214 11:03:57.186951 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6871c749-b4c2-4a16-8322-aa4384a1b86b","Type":"ContainerDied","Data":"6c62a90297343352e1252a91e5d5e40032f0917d1f8225eb84c6dd635b60a3d4"} Feb 14 11:03:57 crc kubenswrapper[4736]: I0214 11:03:57.310895 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 11:03:57 crc kubenswrapper[4736]: I0214 11:03:57.369445 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5l8f\" (UniqueName: \"kubernetes.io/projected/6871c749-b4c2-4a16-8322-aa4384a1b86b-kube-api-access-x5l8f\") pod \"6871c749-b4c2-4a16-8322-aa4384a1b86b\" (UID: \"6871c749-b4c2-4a16-8322-aa4384a1b86b\") " Feb 14 11:03:57 crc kubenswrapper[4736]: I0214 11:03:57.374842 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6871c749-b4c2-4a16-8322-aa4384a1b86b-kube-api-access-x5l8f" (OuterVolumeSpecName: "kube-api-access-x5l8f") pod "6871c749-b4c2-4a16-8322-aa4384a1b86b" (UID: "6871c749-b4c2-4a16-8322-aa4384a1b86b"). InnerVolumeSpecName "kube-api-access-x5l8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:03:57 crc kubenswrapper[4736]: I0214 11:03:57.471596 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5l8f\" (UniqueName: \"kubernetes.io/projected/6871c749-b4c2-4a16-8322-aa4384a1b86b-kube-api-access-x5l8f\") on node \"crc\" DevicePath \"\"" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.199357 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6871c749-b4c2-4a16-8322-aa4384a1b86b","Type":"ContainerDied","Data":"eea932f5779db3d5ede22bccc19211d06babed83ad990737b1a70736716ab211"} Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.199432 4736 scope.go:117] "RemoveContainer" containerID="6c62a90297343352e1252a91e5d5e40032f0917d1f8225eb84c6dd635b60a3d4" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.199650 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.241765 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.249372 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.274291 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 11:03:58 crc kubenswrapper[4736]: E0214 11:03:58.274720 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6871c749-b4c2-4a16-8322-aa4384a1b86b" containerName="kube-state-metrics" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.274737 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6871c749-b4c2-4a16-8322-aa4384a1b86b" containerName="kube-state-metrics" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.275070 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6871c749-b4c2-4a16-8322-aa4384a1b86b" containerName="kube-state-metrics" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.275690 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.277545 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.277889 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.288167 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.389315 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60c0b91-8564-472a-b4a7-8bab9a773d39-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.389387 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqggk\" (UniqueName: \"kubernetes.io/projected/a60c0b91-8564-472a-b4a7-8bab9a773d39-kube-api-access-hqggk\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.389476 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a60c0b91-8564-472a-b4a7-8bab9a773d39-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.389493 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/a60c0b91-8564-472a-b4a7-8bab9a773d39-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.410636 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6871c749-b4c2-4a16-8322-aa4384a1b86b" path="/var/lib/kubelet/pods/6871c749-b4c2-4a16-8322-aa4384a1b86b/volumes" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.491694 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqggk\" (UniqueName: \"kubernetes.io/projected/a60c0b91-8564-472a-b4a7-8bab9a773d39-kube-api-access-hqggk\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.492637 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a60c0b91-8564-472a-b4a7-8bab9a773d39-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.492810 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a60c0b91-8564-472a-b4a7-8bab9a773d39-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.493029 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60c0b91-8564-472a-b4a7-8bab9a773d39-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " 
pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.498430 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a60c0b91-8564-472a-b4a7-8bab9a773d39-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.498475 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a60c0b91-8564-472a-b4a7-8bab9a773d39-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.506834 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60c0b91-8564-472a-b4a7-8bab9a773d39-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.512150 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqggk\" (UniqueName: \"kubernetes.io/projected/a60c0b91-8564-472a-b4a7-8bab9a773d39-kube-api-access-hqggk\") pod \"kube-state-metrics-0\" (UID: \"a60c0b91-8564-472a-b4a7-8bab9a773d39\") " pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.588731 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.662572 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.662956 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="ceilometer-central-agent" containerID="cri-o://315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2" gracePeriod=30 Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.663407 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="proxy-httpd" containerID="cri-o://4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc" gracePeriod=30 Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.663516 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="sg-core" containerID="cri-o://d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d" gracePeriod=30 Feb 14 11:03:58 crc kubenswrapper[4736]: I0214 11:03:58.663567 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="ceilometer-notification-agent" containerID="cri-o://a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f" gracePeriod=30 Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.087468 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 11:03:59 crc kubenswrapper[4736]: W0214 11:03:59.091659 4736 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda60c0b91_8564_472a_b4a7_8bab9a773d39.slice/crio-320f6f530efcd6f67c2c8d2ef788c96626588dbcf3147642c6f924034c3dcedb WatchSource:0}: Error finding container 320f6f530efcd6f67c2c8d2ef788c96626588dbcf3147642c6f924034c3dcedb: Status 404 returned error can't find the container with id 320f6f530efcd6f67c2c8d2ef788c96626588dbcf3147642c6f924034c3dcedb Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.209906 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a60c0b91-8564-472a-b4a7-8bab9a773d39","Type":"ContainerStarted","Data":"320f6f530efcd6f67c2c8d2ef788c96626588dbcf3147642c6f924034c3dcedb"} Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.215791 4736 generic.go:334] "Generic (PLEG): container finished" podID="227132b1-e84d-44fd-8991-9161edfd4f15" containerID="4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc" exitCode=0 Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.215817 4736 generic.go:334] "Generic (PLEG): container finished" podID="227132b1-e84d-44fd-8991-9161edfd4f15" containerID="d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d" exitCode=2 Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.215826 4736 generic.go:334] "Generic (PLEG): container finished" podID="227132b1-e84d-44fd-8991-9161edfd4f15" containerID="315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2" exitCode=0 Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.215836 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerDied","Data":"4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc"} Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.215887 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerDied","Data":"d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d"} Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.215908 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerDied","Data":"315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2"} Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.656668 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.657067 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.663220 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 11:03:59 crc kubenswrapper[4736]: I0214 11:03:59.663297 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.237399 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a60c0b91-8564-472a-b4a7-8bab9a773d39","Type":"ContainerStarted","Data":"7ec6b3f48650b6b30e7b056aa545bfa5577e6cb4399032476b8770d043db4f92"} Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.237507 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.260265 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.709611063 podStartE2EDuration="2.260247572s" podCreationTimestamp="2026-02-14 11:03:58 +0000 UTC" firstStartedPulling="2026-02-14 11:03:59.094258953 +0000 UTC m=+1349.462886311" 
lastFinishedPulling="2026-02-14 11:03:59.644895452 +0000 UTC m=+1350.013522820" observedRunningTime="2026-02-14 11:04:00.253099644 +0000 UTC m=+1350.621727012" watchObservedRunningTime="2026-02-14 11:04:00.260247572 +0000 UTC m=+1350.628874940" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.719993 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.860362 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx2c6\" (UniqueName: \"kubernetes.io/projected/227132b1-e84d-44fd-8991-9161edfd4f15-kube-api-access-wx2c6\") pod \"227132b1-e84d-44fd-8991-9161edfd4f15\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.860419 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-combined-ca-bundle\") pod \"227132b1-e84d-44fd-8991-9161edfd4f15\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.860480 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-scripts\") pod \"227132b1-e84d-44fd-8991-9161edfd4f15\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.860504 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-run-httpd\") pod \"227132b1-e84d-44fd-8991-9161edfd4f15\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.860600 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-log-httpd\") pod \"227132b1-e84d-44fd-8991-9161edfd4f15\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.860617 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-config-data\") pod \"227132b1-e84d-44fd-8991-9161edfd4f15\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.860649 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-sg-core-conf-yaml\") pod \"227132b1-e84d-44fd-8991-9161edfd4f15\" (UID: \"227132b1-e84d-44fd-8991-9161edfd4f15\") " Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.860881 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "227132b1-e84d-44fd-8991-9161edfd4f15" (UID: "227132b1-e84d-44fd-8991-9161edfd4f15"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.861134 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "227132b1-e84d-44fd-8991-9161edfd4f15" (UID: "227132b1-e84d-44fd-8991-9161edfd4f15"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.861526 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.861549 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/227132b1-e84d-44fd-8991-9161edfd4f15-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.885775 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227132b1-e84d-44fd-8991-9161edfd4f15-kube-api-access-wx2c6" (OuterVolumeSpecName: "kube-api-access-wx2c6") pod "227132b1-e84d-44fd-8991-9161edfd4f15" (UID: "227132b1-e84d-44fd-8991-9161edfd4f15"). InnerVolumeSpecName "kube-api-access-wx2c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.892908 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-scripts" (OuterVolumeSpecName: "scripts") pod "227132b1-e84d-44fd-8991-9161edfd4f15" (UID: "227132b1-e84d-44fd-8991-9161edfd4f15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.900669 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "227132b1-e84d-44fd-8991-9161edfd4f15" (UID: "227132b1-e84d-44fd-8991-9161edfd4f15"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.963766 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.963791 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx2c6\" (UniqueName: \"kubernetes.io/projected/227132b1-e84d-44fd-8991-9161edfd4f15-kube-api-access-wx2c6\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.963802 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.970705 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "227132b1-e84d-44fd-8991-9161edfd4f15" (UID: "227132b1-e84d-44fd-8991-9161edfd4f15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:00 crc kubenswrapper[4736]: I0214 11:04:00.998893 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-config-data" (OuterVolumeSpecName: "config-data") pod "227132b1-e84d-44fd-8991-9161edfd4f15" (UID: "227132b1-e84d-44fd-8991-9161edfd4f15"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.064998 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.065031 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227132b1-e84d-44fd-8991-9161edfd4f15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.248447 4736 generic.go:334] "Generic (PLEG): container finished" podID="227132b1-e84d-44fd-8991-9161edfd4f15" containerID="a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f" exitCode=0 Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.248495 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerDied","Data":"a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f"} Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.248887 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"227132b1-e84d-44fd-8991-9161edfd4f15","Type":"ContainerDied","Data":"9ef267a1e985d6b2e69f3deedd9fed44331a83b80cb8019f98d9aef05954fbf9"} Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.248917 4736 scope.go:117] "RemoveContainer" containerID="4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.248510 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.277186 4736 scope.go:117] "RemoveContainer" containerID="d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.307140 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.315926 4736 scope.go:117] "RemoveContainer" containerID="a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.323280 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.337210 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:01 crc kubenswrapper[4736]: E0214 11:04:01.337941 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="sg-core" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.338087 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="sg-core" Feb 14 11:04:01 crc kubenswrapper[4736]: E0214 11:04:01.338197 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="ceilometer-central-agent" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.338269 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="ceilometer-central-agent" Feb 14 11:04:01 crc kubenswrapper[4736]: E0214 11:04:01.338354 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="ceilometer-notification-agent" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.338432 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="ceilometer-notification-agent" Feb 14 11:04:01 crc kubenswrapper[4736]: E0214 11:04:01.338527 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="proxy-httpd" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.338603 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="proxy-httpd" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.338916 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="ceilometer-central-agent" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.339017 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="proxy-httpd" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.339148 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="ceilometer-notification-agent" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.339226 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" containerName="sg-core" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.341486 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.344736 4736 scope.go:117] "RemoveContainer" containerID="315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.351556 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.379606 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.379791 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.382130 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.407925 4736 scope.go:117] "RemoveContainer" containerID="4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc" Feb 14 11:04:01 crc kubenswrapper[4736]: E0214 11:04:01.408397 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc\": container with ID starting with 4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc not found: ID does not exist" containerID="4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.408501 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc"} err="failed to get container status \"4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc\": rpc error: code = NotFound desc = could not find container \"4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc\": 
container with ID starting with 4f5b33d8bd55beabfb52a70930298328cb0494107d4e11e9fad1fe21b12228dc not found: ID does not exist" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.408584 4736 scope.go:117] "RemoveContainer" containerID="d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d" Feb 14 11:04:01 crc kubenswrapper[4736]: E0214 11:04:01.408958 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d\": container with ID starting with d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d not found: ID does not exist" containerID="d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.409082 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d"} err="failed to get container status \"d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d\": rpc error: code = NotFound desc = could not find container \"d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d\": container with ID starting with d4b7e2b3117d48278334299caaef1d40507823cd9a3607ad5a9327dd9200678d not found: ID does not exist" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.409229 4736 scope.go:117] "RemoveContainer" containerID="a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f" Feb 14 11:04:01 crc kubenswrapper[4736]: E0214 11:04:01.409614 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f\": container with ID starting with a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f not found: ID does not exist" 
containerID="a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.409681 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f"} err="failed to get container status \"a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f\": rpc error: code = NotFound desc = could not find container \"a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f\": container with ID starting with a62a9507811611288ffc0df5ba699a4181500c2c6ae53d5fe8f8c29b6be70c6f not found: ID does not exist" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.409708 4736 scope.go:117] "RemoveContainer" containerID="315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2" Feb 14 11:04:01 crc kubenswrapper[4736]: E0214 11:04:01.410149 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2\": container with ID starting with 315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2 not found: ID does not exist" containerID="315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.410173 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2"} err="failed to get container status \"315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2\": rpc error: code = NotFound desc = could not find container \"315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2\": container with ID starting with 315a5f8c16ba924d2e4f834ad3e2d8830de4cda31ae0d799d114d1bacb831dd2 not found: ID does not exist" Feb 14 11:04:01 crc kubenswrapper[4736]: E0214 11:04:01.455249 4736 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod227132b1_e84d_44fd_8991_9161edfd4f15.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod227132b1_e84d_44fd_8991_9161edfd4f15.slice/crio-9ef267a1e985d6b2e69f3deedd9fed44331a83b80cb8019f98d9aef05954fbf9\": RecentStats: unable to find data in memory cache]" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.474803 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.474859 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-scripts\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.474884 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-log-httpd\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.474956 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" 
Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.475010 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lskhh\" (UniqueName: \"kubernetes.io/projected/bbd4d109-f507-4b38-9091-01acf7e19cc1-kube-api-access-lskhh\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.475858 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-run-httpd\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.475915 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.475948 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-config-data\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.578260 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-scripts\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.578534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-log-httpd\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.578973 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-log-httpd\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.579171 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.579297 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lskhh\" (UniqueName: \"kubernetes.io/projected/bbd4d109-f507-4b38-9091-01acf7e19cc1-kube-api-access-lskhh\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.579683 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-run-httpd\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.579733 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc 
kubenswrapper[4736]: I0214 11:04:01.580117 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-config-data\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.580131 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-run-httpd\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.580521 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.583085 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.583461 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.584061 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-scripts\") pod \"ceilometer-0\" 
(UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.584419 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.590055 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-config-data\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.597652 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lskhh\" (UniqueName: \"kubernetes.io/projected/bbd4d109-f507-4b38-9091-01acf7e19cc1-kube-api-access-lskhh\") pod \"ceilometer-0\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " pod="openstack/ceilometer-0" Feb 14 11:04:01 crc kubenswrapper[4736]: I0214 11:04:01.709785 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:04:02 crc kubenswrapper[4736]: I0214 11:04:02.257886 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:02 crc kubenswrapper[4736]: I0214 11:04:02.287347 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerStarted","Data":"a4df440b15732adfe6fcfabac6e9b2ebf5f623fc83d89feec5492800be13f654"} Feb 14 11:04:02 crc kubenswrapper[4736]: I0214 11:04:02.409189 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227132b1-e84d-44fd-8991-9161edfd4f15" path="/var/lib/kubelet/pods/227132b1-e84d-44fd-8991-9161edfd4f15/volumes" Feb 14 11:04:02 crc kubenswrapper[4736]: I0214 11:04:02.765944 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:04:02 crc kubenswrapper[4736]: I0214 11:04:02.795543 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.180113 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.296837 4736 generic.go:334] "Generic (PLEG): container finished" podID="75255565-4d85-4d9a-917b-e4d9edd33154" containerID="37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498" exitCode=137 Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.296875 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75255565-4d85-4d9a-917b-e4d9edd33154","Type":"ContainerDied","Data":"37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498"} Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.296907 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75255565-4d85-4d9a-917b-e4d9edd33154","Type":"ContainerDied","Data":"ca7ba831b838fe537e1fef921646e0e396ae27f98633800bc4d6fa88c8cee215"} Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.296942 4736 scope.go:117] "RemoveContainer" containerID="37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.297079 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.298559 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerStarted","Data":"d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b"} Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.314459 4736 scope.go:117] "RemoveContainer" containerID="37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498" Feb 14 11:04:03 crc kubenswrapper[4736]: E0214 11:04:03.314920 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498\": container with ID starting with 37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498 not found: ID does not exist" containerID="37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.314949 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498"} err="failed to get container status \"37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498\": rpc error: code = NotFound desc = could not find container \"37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498\": container with ID starting with 37dab4eaeb1bd5b36f5b8e9c7cd037b8631f7ab0fff1d3b17b2223db3b284498 not found: ID does not exist" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.326886 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-combined-ca-bundle\") pod \"75255565-4d85-4d9a-917b-e4d9edd33154\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " Feb 14 11:04:03 crc 
kubenswrapper[4736]: I0214 11:04:03.326999 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blsg8\" (UniqueName: \"kubernetes.io/projected/75255565-4d85-4d9a-917b-e4d9edd33154-kube-api-access-blsg8\") pod \"75255565-4d85-4d9a-917b-e4d9edd33154\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.327098 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-config-data\") pod \"75255565-4d85-4d9a-917b-e4d9edd33154\" (UID: \"75255565-4d85-4d9a-917b-e4d9edd33154\") " Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.333039 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75255565-4d85-4d9a-917b-e4d9edd33154-kube-api-access-blsg8" (OuterVolumeSpecName: "kube-api-access-blsg8") pod "75255565-4d85-4d9a-917b-e4d9edd33154" (UID: "75255565-4d85-4d9a-917b-e4d9edd33154"). InnerVolumeSpecName "kube-api-access-blsg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.364952 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-config-data" (OuterVolumeSpecName: "config-data") pod "75255565-4d85-4d9a-917b-e4d9edd33154" (UID: "75255565-4d85-4d9a-917b-e4d9edd33154"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.369831 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75255565-4d85-4d9a-917b-e4d9edd33154" (UID: "75255565-4d85-4d9a-917b-e4d9edd33154"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.429020 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blsg8\" (UniqueName: \"kubernetes.io/projected/75255565-4d85-4d9a-917b-e4d9edd33154-kube-api-access-blsg8\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.429058 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.429072 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75255565-4d85-4d9a-917b-e4d9edd33154-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.639303 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.657338 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.664044 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 11:04:03 crc kubenswrapper[4736]: E0214 11:04:03.664538 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75255565-4d85-4d9a-917b-e4d9edd33154" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.664556 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="75255565-4d85-4d9a-917b-e4d9edd33154" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.664755 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="75255565-4d85-4d9a-917b-e4d9edd33154" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 
11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.665431 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.667466 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.667968 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.668103 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.676196 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.736690 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.736784 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.736821 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.736849 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc5vx\" (UniqueName: \"kubernetes.io/projected/56c90a64-d883-4865-a393-2a9aec8e43a8-kube-api-access-hc5vx\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.736919 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.838161 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.838223 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.838259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.838289 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc5vx\" (UniqueName: \"kubernetes.io/projected/56c90a64-d883-4865-a393-2a9aec8e43a8-kube-api-access-hc5vx\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.838353 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.841979 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.843253 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.848285 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 
11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.848584 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c90a64-d883-4865-a393-2a9aec8e43a8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.855706 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc5vx\" (UniqueName: \"kubernetes.io/projected/56c90a64-d883-4865-a393-2a9aec8e43a8-kube-api-access-hc5vx\") pod \"nova-cell1-novncproxy-0\" (UID: \"56c90a64-d883-4865-a393-2a9aec8e43a8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:03 crc kubenswrapper[4736]: I0214 11:04:03.983597 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.306558 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerStarted","Data":"c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c"} Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.307079 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerStarted","Data":"379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49"} Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.362061 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.411406 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75255565-4d85-4d9a-917b-e4d9edd33154" path="/var/lib/kubelet/pods/75255565-4d85-4d9a-917b-e4d9edd33154/volumes" Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 
11:04:04.496951 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.497473 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.498920 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.536807 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.712213 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-78d96c5d8-mfqqp" Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.793046 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54b8d5f54d-bvjc4"] Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.793265 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon-log" containerID="cri-o://6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9" gracePeriod=30 Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.793670 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" containerID="cri-o://622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a" gracePeriod=30 Feb 14 11:04:04 crc kubenswrapper[4736]: I0214 11:04:04.829946 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 14 
11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.321012 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"56c90a64-d883-4865-a393-2a9aec8e43a8","Type":"ContainerStarted","Data":"af0d6bf6b858c3bad6be7c6819493e1212e7beb896ff1b9ec049aee1bb5424c5"} Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.321973 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"56c90a64-d883-4865-a393-2a9aec8e43a8","Type":"ContainerStarted","Data":"8c18b2e5bcefa3c3227aa6daae8e32ba850462557ecc747204d4e194a0d09447"} Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.322082 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.325932 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.339227 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.339208989 podStartE2EDuration="2.339208989s" podCreationTimestamp="2026-02-14 11:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:04:05.333367799 +0000 UTC m=+1355.701995197" watchObservedRunningTime="2026-02-14 11:04:05.339208989 +0000 UTC m=+1355.707836357" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.521611 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5ngdn"] Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.523054 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.545507 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5ngdn"] Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.583957 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.584265 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.584304 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk2ws\" (UniqueName: \"kubernetes.io/projected/d8c617df-907e-46fa-b6be-b0c62b56afc9-kube-api-access-wk2ws\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.584322 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.584371 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.584402 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-config\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.689628 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-config\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.689718 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.689781 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.689818 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk2ws\" (UniqueName: 
\"kubernetes.io/projected/d8c617df-907e-46fa-b6be-b0c62b56afc9-kube-api-access-wk2ws\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.689836 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.689885 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.690822 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-config\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.691317 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.691397 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-svc\") pod 
\"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.693585 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.696314 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.709501 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk2ws\" (UniqueName: \"kubernetes.io/projected/d8c617df-907e-46fa-b6be-b0c62b56afc9-kube-api-access-wk2ws\") pod \"dnsmasq-dns-89c5cd4d5-5ngdn\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:05 crc kubenswrapper[4736]: I0214 11:04:05.848031 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:06 crc kubenswrapper[4736]: I0214 11:04:06.328907 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerStarted","Data":"f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504"} Feb 14 11:04:06 crc kubenswrapper[4736]: I0214 11:04:06.361173 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.037340405 podStartE2EDuration="5.361153415s" podCreationTimestamp="2026-02-14 11:04:01 +0000 UTC" firstStartedPulling="2026-02-14 11:04:02.248184239 +0000 UTC m=+1352.616811607" lastFinishedPulling="2026-02-14 11:04:05.571997249 +0000 UTC m=+1355.940624617" observedRunningTime="2026-02-14 11:04:06.355324986 +0000 UTC m=+1356.723952354" watchObservedRunningTime="2026-02-14 11:04:06.361153415 +0000 UTC m=+1356.729780773" Feb 14 11:04:06 crc kubenswrapper[4736]: I0214 11:04:06.452596 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5ngdn"] Feb 14 11:04:06 crc kubenswrapper[4736]: W0214 11:04:06.466329 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8c617df_907e_46fa_b6be_b0c62b56afc9.slice/crio-3e89c25a3cf5ecb54c9e438151ce483e8cc4a4c7c2ef323c18641ad1b5bd71cd WatchSource:0}: Error finding container 3e89c25a3cf5ecb54c9e438151ce483e8cc4a4c7c2ef323c18641ad1b5bd71cd: Status 404 returned error can't find the container with id 3e89c25a3cf5ecb54c9e438151ce483e8cc4a4c7c2ef323c18641ad1b5bd71cd Feb 14 11:04:07 crc kubenswrapper[4736]: I0214 11:04:07.336551 4736 generic.go:334] "Generic (PLEG): container finished" podID="d8c617df-907e-46fa-b6be-b0c62b56afc9" containerID="0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0" exitCode=0 Feb 14 11:04:07 crc kubenswrapper[4736]: I0214 
11:04:07.336821 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" event={"ID":"d8c617df-907e-46fa-b6be-b0c62b56afc9","Type":"ContainerDied","Data":"0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0"} Feb 14 11:04:07 crc kubenswrapper[4736]: I0214 11:04:07.336934 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" event={"ID":"d8c617df-907e-46fa-b6be-b0c62b56afc9","Type":"ContainerStarted","Data":"3e89c25a3cf5ecb54c9e438151ce483e8cc4a4c7c2ef323c18641ad1b5bd71cd"} Feb 14 11:04:07 crc kubenswrapper[4736]: I0214 11:04:07.337283 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.176810 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.245425 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.293335 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:39868->10.217.0.148:8443: read: connection reset by peer" Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.367668 4736 generic.go:334] "Generic (PLEG): container finished" podID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerID="622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a" exitCode=0 Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.367780 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" 
event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerDied","Data":"622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a"} Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.367855 4736 scope.go:117] "RemoveContainer" containerID="addd3be5783720e5b80a35ec2a30cd08864d12153bd2d833826a22af62c8838b" Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.373556 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-log" containerID="cri-o://62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23" gracePeriod=30 Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.373985 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" event={"ID":"d8c617df-907e-46fa-b6be-b0c62b56afc9","Type":"ContainerStarted","Data":"e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556"} Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.374315 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-api" containerID="cri-o://8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4" gracePeriod=30 Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.378432 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.435301 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" podStartSLOduration=3.435280395 podStartE2EDuration="3.435280395s" podCreationTimestamp="2026-02-14 11:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:04:08.415932443 +0000 UTC m=+1358.784559811" 
watchObservedRunningTime="2026-02-14 11:04:08.435280395 +0000 UTC m=+1358.803907763" Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.617762 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 14 11:04:08 crc kubenswrapper[4736]: I0214 11:04:08.984540 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:09 crc kubenswrapper[4736]: I0214 11:04:09.384228 4736 generic.go:334] "Generic (PLEG): container finished" podID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerID="62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23" exitCode=143 Feb 14 11:04:09 crc kubenswrapper[4736]: I0214 11:04:09.384307 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54a3af93-90c2-4d02-a7df-e60a0d7db585","Type":"ContainerDied","Data":"62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23"} Feb 14 11:04:09 crc kubenswrapper[4736]: I0214 11:04:09.384783 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="ceilometer-central-agent" containerID="cri-o://d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b" gracePeriod=30 Feb 14 11:04:09 crc kubenswrapper[4736]: I0214 11:04:09.384814 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="proxy-httpd" containerID="cri-o://f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504" gracePeriod=30 Feb 14 11:04:09 crc kubenswrapper[4736]: I0214 11:04:09.384838 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="sg-core" 
containerID="cri-o://c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c" gracePeriod=30 Feb 14 11:04:09 crc kubenswrapper[4736]: I0214 11:04:09.384850 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="ceilometer-notification-agent" containerID="cri-o://379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49" gracePeriod=30 Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.272640 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.433347 4736 generic.go:334] "Generic (PLEG): container finished" podID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerID="f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504" exitCode=0 Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.433600 4736 generic.go:334] "Generic (PLEG): container finished" podID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerID="c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c" exitCode=2 Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.433608 4736 generic.go:334] "Generic (PLEG): container finished" podID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerID="379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49" exitCode=0 Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.433439 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerDied","Data":"f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504"} Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.433730 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerDied","Data":"c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c"} Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.433819 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerDied","Data":"379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49"} Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.804347 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.907122 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-config-data\") pod \"bbd4d109-f507-4b38-9091-01acf7e19cc1\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.907349 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-run-httpd\") pod \"bbd4d109-f507-4b38-9091-01acf7e19cc1\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.907462 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lskhh\" (UniqueName: \"kubernetes.io/projected/bbd4d109-f507-4b38-9091-01acf7e19cc1-kube-api-access-lskhh\") pod \"bbd4d109-f507-4b38-9091-01acf7e19cc1\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.907567 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-ceilometer-tls-certs\") pod \"bbd4d109-f507-4b38-9091-01acf7e19cc1\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.907656 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-log-httpd\") pod \"bbd4d109-f507-4b38-9091-01acf7e19cc1\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.907733 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bbd4d109-f507-4b38-9091-01acf7e19cc1" (UID: "bbd4d109-f507-4b38-9091-01acf7e19cc1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.907921 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-sg-core-conf-yaml\") pod \"bbd4d109-f507-4b38-9091-01acf7e19cc1\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.908021 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-combined-ca-bundle\") pod \"bbd4d109-f507-4b38-9091-01acf7e19cc1\" (UID: \"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.908148 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-scripts\") pod \"bbd4d109-f507-4b38-9091-01acf7e19cc1\" (UID: 
\"bbd4d109-f507-4b38-9091-01acf7e19cc1\") " Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.908515 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bbd4d109-f507-4b38-9091-01acf7e19cc1" (UID: "bbd4d109-f507-4b38-9091-01acf7e19cc1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.908691 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.908785 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd4d109-f507-4b38-9091-01acf7e19cc1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.921678 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd4d109-f507-4b38-9091-01acf7e19cc1-kube-api-access-lskhh" (OuterVolumeSpecName: "kube-api-access-lskhh") pod "bbd4d109-f507-4b38-9091-01acf7e19cc1" (UID: "bbd4d109-f507-4b38-9091-01acf7e19cc1"). InnerVolumeSpecName "kube-api-access-lskhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.928922 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-scripts" (OuterVolumeSpecName: "scripts") pod "bbd4d109-f507-4b38-9091-01acf7e19cc1" (UID: "bbd4d109-f507-4b38-9091-01acf7e19cc1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.946812 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbd4d109-f507-4b38-9091-01acf7e19cc1" (UID: "bbd4d109-f507-4b38-9091-01acf7e19cc1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:10 crc kubenswrapper[4736]: I0214 11:04:10.986870 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bbd4d109-f507-4b38-9091-01acf7e19cc1" (UID: "bbd4d109-f507-4b38-9091-01acf7e19cc1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.012377 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.012411 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.012422 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lskhh\" (UniqueName: \"kubernetes.io/projected/bbd4d109-f507-4b38-9091-01acf7e19cc1-kube-api-access-lskhh\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.012434 4736 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-ceilometer-tls-certs\") 
on node \"crc\" DevicePath \"\"" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.017335 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbd4d109-f507-4b38-9091-01acf7e19cc1" (UID: "bbd4d109-f507-4b38-9091-01acf7e19cc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.057326 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-config-data" (OuterVolumeSpecName: "config-data") pod "bbd4d109-f507-4b38-9091-01acf7e19cc1" (UID: "bbd4d109-f507-4b38-9091-01acf7e19cc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.113952 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.113998 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd4d109-f507-4b38-9091-01acf7e19cc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.450490 4736 generic.go:334] "Generic (PLEG): container finished" podID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerID="d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b" exitCode=0 Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.450554 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerDied","Data":"d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b"} Feb 14 
11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.450631 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbd4d109-f507-4b38-9091-01acf7e19cc1","Type":"ContainerDied","Data":"a4df440b15732adfe6fcfabac6e9b2ebf5f623fc83d89feec5492800be13f654"} Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.450663 4736 scope.go:117] "RemoveContainer" containerID="f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.450865 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.499604 4736 scope.go:117] "RemoveContainer" containerID="c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.541025 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.555769 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.566677 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:11 crc kubenswrapper[4736]: E0214 11:04:11.567128 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="sg-core" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.567151 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="sg-core" Feb 14 11:04:11 crc kubenswrapper[4736]: E0214 11:04:11.567181 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="ceilometer-notification-agent" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.567189 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="ceilometer-notification-agent" Feb 14 11:04:11 crc kubenswrapper[4736]: E0214 11:04:11.567206 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="proxy-httpd" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.567213 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="proxy-httpd" Feb 14 11:04:11 crc kubenswrapper[4736]: E0214 11:04:11.567235 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="ceilometer-central-agent" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.567243 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="ceilometer-central-agent" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.567457 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="proxy-httpd" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.567472 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="ceilometer-notification-agent" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.567492 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="sg-core" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.567502 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" containerName="ceilometer-central-agent" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.570233 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.573461 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.573810 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.578887 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.588529 4736 scope.go:117] "RemoveContainer" containerID="379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.597061 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.623272 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e312a4e-321c-45f9-a15b-b41e8a500356-run-httpd\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.623323 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqhc8\" (UniqueName: \"kubernetes.io/projected/0e312a4e-321c-45f9-a15b-b41e8a500356-kube-api-access-gqhc8\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.623837 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " 
pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.623987 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e312a4e-321c-45f9-a15b-b41e8a500356-log-httpd\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.624051 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-config-data\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.624089 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-scripts\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.624147 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.624167 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.678168 4736 scope.go:117] "RemoveContainer" 
containerID="d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.730233 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-scripts\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.730283 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.730326 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.730556 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e312a4e-321c-45f9-a15b-b41e8a500356-run-httpd\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.730600 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqhc8\" (UniqueName: \"kubernetes.io/projected/0e312a4e-321c-45f9-a15b-b41e8a500356-kube-api-access-gqhc8\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.730661 4736 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.730788 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e312a4e-321c-45f9-a15b-b41e8a500356-log-httpd\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.730818 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-config-data\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.732155 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e312a4e-321c-45f9-a15b-b41e8a500356-run-httpd\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.736939 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e312a4e-321c-45f9-a15b-b41e8a500356-log-httpd\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.739211 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-config-data\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 
11:04:11.759694 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.764894 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqhc8\" (UniqueName: \"kubernetes.io/projected/0e312a4e-321c-45f9-a15b-b41e8a500356-kube-api-access-gqhc8\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.765089 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-scripts\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.760244 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.765820 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e312a4e-321c-45f9-a15b-b41e8a500356-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e312a4e-321c-45f9-a15b-b41e8a500356\") " pod="openstack/ceilometer-0" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.766616 4736 scope.go:117] "RemoveContainer" containerID="f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504" Feb 14 11:04:11 crc kubenswrapper[4736]: E0214 11:04:11.769132 4736 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504\": container with ID starting with f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504 not found: ID does not exist" containerID="f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.769871 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504"} err="failed to get container status \"f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504\": rpc error: code = NotFound desc = could not find container \"f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504\": container with ID starting with f75ac1d4050449e22f7f10e429facb7ad50f6d58364aaaedb18b6087a382c504 not found: ID does not exist" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.769985 4736 scope.go:117] "RemoveContainer" containerID="c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c" Feb 14 11:04:11 crc kubenswrapper[4736]: E0214 11:04:11.772772 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c\": container with ID starting with c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c not found: ID does not exist" containerID="c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.773064 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c"} err="failed to get container status \"c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c\": rpc error: code = NotFound desc = could not find container 
\"c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c\": container with ID starting with c81b9686720d9207d0073e72cc4bc27c4a85d25e458933220b760c3643f2de6c not found: ID does not exist" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.773199 4736 scope.go:117] "RemoveContainer" containerID="379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49" Feb 14 11:04:11 crc kubenswrapper[4736]: E0214 11:04:11.773557 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49\": container with ID starting with 379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49 not found: ID does not exist" containerID="379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.773717 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49"} err="failed to get container status \"379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49\": rpc error: code = NotFound desc = could not find container \"379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49\": container with ID starting with 379c8e914639d0f1e07c64b44115e70b828b4db513f1171c2f70d8f571301a49 not found: ID does not exist" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.773842 4736 scope.go:117] "RemoveContainer" containerID="d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b" Feb 14 11:04:11 crc kubenswrapper[4736]: E0214 11:04:11.774187 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b\": container with ID starting with d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b not found: ID does not exist" 
containerID="d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.774485 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b"} err="failed to get container status \"d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b\": rpc error: code = NotFound desc = could not find container \"d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b\": container with ID starting with d51effe2f28f479cd92df777c319e74e84b8f6c942d1a2148318824f162a7f9b not found: ID does not exist" Feb 14 11:04:11 crc kubenswrapper[4736]: E0214 11:04:11.882616 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54a3af93_90c2_4d02_a7df_e60a0d7db585.slice/crio-conmon-8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54a3af93_90c2_4d02_a7df_e60a0d7db585.slice/crio-8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4.scope\": RecentStats: unable to find data in memory cache]" Feb 14 11:04:11 crc kubenswrapper[4736]: I0214 11:04:11.976676 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.004782 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.038249 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-combined-ca-bundle\") pod \"54a3af93-90c2-4d02-a7df-e60a0d7db585\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.038366 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frz47\" (UniqueName: \"kubernetes.io/projected/54a3af93-90c2-4d02-a7df-e60a0d7db585-kube-api-access-frz47\") pod \"54a3af93-90c2-4d02-a7df-e60a0d7db585\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.038419 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-config-data\") pod \"54a3af93-90c2-4d02-a7df-e60a0d7db585\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.038460 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54a3af93-90c2-4d02-a7df-e60a0d7db585-logs\") pod \"54a3af93-90c2-4d02-a7df-e60a0d7db585\" (UID: \"54a3af93-90c2-4d02-a7df-e60a0d7db585\") " Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.039874 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54a3af93-90c2-4d02-a7df-e60a0d7db585-logs" (OuterVolumeSpecName: "logs") pod "54a3af93-90c2-4d02-a7df-e60a0d7db585" (UID: "54a3af93-90c2-4d02-a7df-e60a0d7db585"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.051762 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54a3af93-90c2-4d02-a7df-e60a0d7db585-kube-api-access-frz47" (OuterVolumeSpecName: "kube-api-access-frz47") pod "54a3af93-90c2-4d02-a7df-e60a0d7db585" (UID: "54a3af93-90c2-4d02-a7df-e60a0d7db585"). InnerVolumeSpecName "kube-api-access-frz47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.095955 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54a3af93-90c2-4d02-a7df-e60a0d7db585" (UID: "54a3af93-90c2-4d02-a7df-e60a0d7db585"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.120140 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-config-data" (OuterVolumeSpecName: "config-data") pod "54a3af93-90c2-4d02-a7df-e60a0d7db585" (UID: "54a3af93-90c2-4d02-a7df-e60a0d7db585"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.140439 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frz47\" (UniqueName: \"kubernetes.io/projected/54a3af93-90c2-4d02-a7df-e60a0d7db585-kube-api-access-frz47\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.140467 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.140476 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54a3af93-90c2-4d02-a7df-e60a0d7db585-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.140485 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a3af93-90c2-4d02-a7df-e60a0d7db585-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.406688 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd4d109-f507-4b38-9091-01acf7e19cc1" path="/var/lib/kubelet/pods/bbd4d109-f507-4b38-9091-01acf7e19cc1/volumes" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.460683 4736 generic.go:334] "Generic (PLEG): container finished" podID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerID="8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4" exitCode=0 Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.460731 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54a3af93-90c2-4d02-a7df-e60a0d7db585","Type":"ContainerDied","Data":"8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4"} Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.461028 4736 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54a3af93-90c2-4d02-a7df-e60a0d7db585","Type":"ContainerDied","Data":"a550a5b7ad9b5d576341771c78c69c77bbefaefb99a41de2173017d061de9c45"} Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.461047 4736 scope.go:117] "RemoveContainer" containerID="8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.461138 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.489492 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.503560 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.510833 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:12 crc kubenswrapper[4736]: E0214 11:04:12.511229 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-log" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.511241 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-log" Feb 14 11:04:12 crc kubenswrapper[4736]: E0214 11:04:12.511251 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-api" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.511257 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-api" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.511422 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-api" Feb 14 11:04:12 crc 
kubenswrapper[4736]: I0214 11:04:12.511439 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" containerName="nova-api-log" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.512307 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.513950 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.514237 4736 scope.go:117] "RemoveContainer" containerID="62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.514448 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.514662 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.541543 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.548843 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.548914 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.548965 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph4h6\" (UniqueName: \"kubernetes.io/projected/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-kube-api-access-ph4h6\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.548989 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-logs\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.549055 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-config-data\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.549094 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.549187 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.564856 4736 scope.go:117] "RemoveContainer" containerID="8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4" Feb 14 11:04:12 crc kubenswrapper[4736]: E0214 11:04:12.565644 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4\": container 
with ID starting with 8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4 not found: ID does not exist" containerID="8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.565675 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4"} err="failed to get container status \"8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4\": rpc error: code = NotFound desc = could not find container \"8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4\": container with ID starting with 8e28c6286628a8c5a4e753cd5adaf47127fc8e0fc438d1d6705b00fc58fdf0f4 not found: ID does not exist" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.565695 4736 scope.go:117] "RemoveContainer" containerID="62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23" Feb 14 11:04:12 crc kubenswrapper[4736]: E0214 11:04:12.568860 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23\": container with ID starting with 62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23 not found: ID does not exist" containerID="62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.568897 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23"} err="failed to get container status \"62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23\": rpc error: code = NotFound desc = could not find container \"62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23\": container with ID starting with 62220547c69949d066fe26f6a97d26a6cbbea08a87e2101a6caa481917bbef23 not 
found: ID does not exist" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.651393 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-config-data\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.651456 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.651511 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.651540 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.651583 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph4h6\" (UniqueName: \"kubernetes.io/projected/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-kube-api-access-ph4h6\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.651607 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-logs\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.652455 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-logs\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.655241 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.655892 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-config-data\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.656876 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.658176 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.668908 4736 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-ph4h6\" (UniqueName: \"kubernetes.io/projected/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-kube-api-access-ph4h6\") pod \"nova-api-0\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " pod="openstack/nova-api-0" Feb 14 11:04:12 crc kubenswrapper[4736]: I0214 11:04:12.833370 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:04:13 crc kubenswrapper[4736]: I0214 11:04:13.303093 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:13 crc kubenswrapper[4736]: W0214 11:04:13.306525 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4b2a0d6_c156_459b_a19b_c5cd41fbc336.slice/crio-33aebcaafdb08aa9ec509d2840422cedf375edd305f2bdcb3205f3789bb25b16 WatchSource:0}: Error finding container 33aebcaafdb08aa9ec509d2840422cedf375edd305f2bdcb3205f3789bb25b16: Status 404 returned error can't find the container with id 33aebcaafdb08aa9ec509d2840422cedf375edd305f2bdcb3205f3789bb25b16 Feb 14 11:04:13 crc kubenswrapper[4736]: I0214 11:04:13.476883 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4b2a0d6-c156-459b-a19b-c5cd41fbc336","Type":"ContainerStarted","Data":"2e0a3cc8ce29d1b26d9d491e58885badde645f4c279e2f3a48002dfc45a2c9cd"} Feb 14 11:04:13 crc kubenswrapper[4736]: I0214 11:04:13.476930 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4b2a0d6-c156-459b-a19b-c5cd41fbc336","Type":"ContainerStarted","Data":"33aebcaafdb08aa9ec509d2840422cedf375edd305f2bdcb3205f3789bb25b16"} Feb 14 11:04:13 crc kubenswrapper[4736]: I0214 11:04:13.480296 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e312a4e-321c-45f9-a15b-b41e8a500356","Type":"ContainerStarted","Data":"4c9fbc65c0efac3195b972175d615b3da492f712c2e9b5aaacaa41d6ae536f37"} Feb 14 11:04:13 crc 
kubenswrapper[4736]: I0214 11:04:13.480360 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e312a4e-321c-45f9-a15b-b41e8a500356","Type":"ContainerStarted","Data":"259cfb073d9ca7a9e439be41ad4caed427db8b10b643b98e99e8a8ceea3c1d65"} Feb 14 11:04:13 crc kubenswrapper[4736]: I0214 11:04:13.985189 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.003435 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.409885 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54a3af93-90c2-4d02-a7df-e60a0d7db585" path="/var/lib/kubelet/pods/54a3af93-90c2-4d02-a7df-e60a0d7db585/volumes" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.492504 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e312a4e-321c-45f9-a15b-b41e8a500356","Type":"ContainerStarted","Data":"dd8d937d3a91a4decf4e012886993296d9cca7830d7a6632c487572b0451700e"} Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.492540 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e312a4e-321c-45f9-a15b-b41e8a500356","Type":"ContainerStarted","Data":"94a46cae6da5b1600fa579643b011e01d1b197f6739f62b0c871496968344574"} Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.494525 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4b2a0d6-c156-459b-a19b-c5cd41fbc336","Type":"ContainerStarted","Data":"3cdb8607c5cbae7e4b06244c2450359d41343e1eebfff587938affac4a32225b"} Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.512520 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 14 11:04:14 crc kubenswrapper[4736]: 
I0214 11:04:14.525816 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.525796706 podStartE2EDuration="2.525796706s" podCreationTimestamp="2026-02-14 11:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:04:14.520071419 +0000 UTC m=+1364.888698787" watchObservedRunningTime="2026-02-14 11:04:14.525796706 +0000 UTC m=+1364.894424074" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.687447 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wb8dk"] Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.688730 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.694532 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.694602 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.701171 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wb8dk"] Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.791008 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.791073 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-scripts\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.791134 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6k7\" (UniqueName: \"kubernetes.io/projected/ec8f51b8-16c7-4d32-9595-199616101d23-kube-api-access-4l6k7\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.791218 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-config-data\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.893263 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-config-data\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.893339 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.893381 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-scripts\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.893438 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l6k7\" (UniqueName: \"kubernetes.io/projected/ec8f51b8-16c7-4d32-9595-199616101d23-kube-api-access-4l6k7\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.898418 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-config-data\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.900223 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-scripts\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.900797 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:14 crc kubenswrapper[4736]: I0214 11:04:14.913599 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l6k7\" (UniqueName: 
\"kubernetes.io/projected/ec8f51b8-16c7-4d32-9595-199616101d23-kube-api-access-4l6k7\") pod \"nova-cell1-cell-mapping-wb8dk\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:15 crc kubenswrapper[4736]: I0214 11:04:15.010135 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:15 crc kubenswrapper[4736]: I0214 11:04:15.484288 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wb8dk"] Feb 14 11:04:15 crc kubenswrapper[4736]: I0214 11:04:15.506282 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wb8dk" event={"ID":"ec8f51b8-16c7-4d32-9595-199616101d23","Type":"ContainerStarted","Data":"31aed8e42e7955cbeb7dad37de1bce7d09d289af748f5c07d7b6add2e5d2da20"} Feb 14 11:04:15 crc kubenswrapper[4736]: I0214 11:04:15.857926 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" Feb 14 11:04:15 crc kubenswrapper[4736]: I0214 11:04:15.941780 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dt4b7"] Feb 14 11:04:15 crc kubenswrapper[4736]: I0214 11:04:15.942002 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" podUID="e0d4ecd4-8be3-4c23-ae88-f93464271353" containerName="dnsmasq-dns" containerID="cri-o://6d6f0fa7f10557fdf58e635daf79ff9ac56ecb1568a2e1ef7528b22d1c357285" gracePeriod=10 Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.518269 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wb8dk" event={"ID":"ec8f51b8-16c7-4d32-9595-199616101d23","Type":"ContainerStarted","Data":"0764dedc1916aca33f037a0e407984fed429279d56b12e232cce076b77ae6bf4"} Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.522632 4736 generic.go:334] 
"Generic (PLEG): container finished" podID="e0d4ecd4-8be3-4c23-ae88-f93464271353" containerID="6d6f0fa7f10557fdf58e635daf79ff9ac56ecb1568a2e1ef7528b22d1c357285" exitCode=0 Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.522681 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" event={"ID":"e0d4ecd4-8be3-4c23-ae88-f93464271353","Type":"ContainerDied","Data":"6d6f0fa7f10557fdf58e635daf79ff9ac56ecb1568a2e1ef7528b22d1c357285"} Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.522699 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" event={"ID":"e0d4ecd4-8be3-4c23-ae88-f93464271353","Type":"ContainerDied","Data":"8ed96a6a30354bc4ef31bd4e8da6b853c7a7d5267482f12383080b8800f852a8"} Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.522709 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ed96a6a30354bc4ef31bd4e8da6b853c7a7d5267482f12383080b8800f852a8" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.526729 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.538446 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wb8dk" podStartSLOduration=2.538431791 podStartE2EDuration="2.538431791s" podCreationTimestamp="2026-02-14 11:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:04:16.535012581 +0000 UTC m=+1366.903639949" watchObservedRunningTime="2026-02-14 11:04:16.538431791 +0000 UTC m=+1366.907059159" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.551330 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e312a4e-321c-45f9-a15b-b41e8a500356","Type":"ContainerStarted","Data":"908fee19fc7c5171540d35d141292e6ba4a74c477e190adc3072ddda23bdabe1"} Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.552293 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.587469 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.427527864 podStartE2EDuration="5.587449894s" podCreationTimestamp="2026-02-14 11:04:11 +0000 UTC" firstStartedPulling="2026-02-14 11:04:12.528802046 +0000 UTC m=+1362.897429414" lastFinishedPulling="2026-02-14 11:04:15.688724076 +0000 UTC m=+1366.057351444" observedRunningTime="2026-02-14 11:04:16.575459506 +0000 UTC m=+1366.944086874" watchObservedRunningTime="2026-02-14 11:04:16.587449894 +0000 UTC m=+1366.956077262" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.635105 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-nb\") pod 
\"e0d4ecd4-8be3-4c23-ae88-f93464271353\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.635206 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-swift-storage-0\") pod \"e0d4ecd4-8be3-4c23-ae88-f93464271353\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.635255 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9z6w\" (UniqueName: \"kubernetes.io/projected/e0d4ecd4-8be3-4c23-ae88-f93464271353-kube-api-access-r9z6w\") pod \"e0d4ecd4-8be3-4c23-ae88-f93464271353\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.635322 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-config\") pod \"e0d4ecd4-8be3-4c23-ae88-f93464271353\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.635399 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-sb\") pod \"e0d4ecd4-8be3-4c23-ae88-f93464271353\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.635435 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-svc\") pod \"e0d4ecd4-8be3-4c23-ae88-f93464271353\" (UID: \"e0d4ecd4-8be3-4c23-ae88-f93464271353\") " Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.663912 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e0d4ecd4-8be3-4c23-ae88-f93464271353-kube-api-access-r9z6w" (OuterVolumeSpecName: "kube-api-access-r9z6w") pod "e0d4ecd4-8be3-4c23-ae88-f93464271353" (UID: "e0d4ecd4-8be3-4c23-ae88-f93464271353"). InnerVolumeSpecName "kube-api-access-r9z6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.725370 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-config" (OuterVolumeSpecName: "config") pod "e0d4ecd4-8be3-4c23-ae88-f93464271353" (UID: "e0d4ecd4-8be3-4c23-ae88-f93464271353"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.733101 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0d4ecd4-8be3-4c23-ae88-f93464271353" (UID: "e0d4ecd4-8be3-4c23-ae88-f93464271353"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.738335 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.738375 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9z6w\" (UniqueName: \"kubernetes.io/projected/e0d4ecd4-8be3-4c23-ae88-f93464271353-kube-api-access-r9z6w\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.738388 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.738447 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e0d4ecd4-8be3-4c23-ae88-f93464271353" (UID: "e0d4ecd4-8be3-4c23-ae88-f93464271353"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.743808 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0d4ecd4-8be3-4c23-ae88-f93464271353" (UID: "e0d4ecd4-8be3-4c23-ae88-f93464271353"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.760922 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0d4ecd4-8be3-4c23-ae88-f93464271353" (UID: "e0d4ecd4-8be3-4c23-ae88-f93464271353"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.839789 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.839826 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:16 crc kubenswrapper[4736]: I0214 11:04:16.839836 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0d4ecd4-8be3-4c23-ae88-f93464271353-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:17 crc kubenswrapper[4736]: I0214 11:04:17.558271 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dt4b7" Feb 14 11:04:17 crc kubenswrapper[4736]: I0214 11:04:17.603958 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dt4b7"] Feb 14 11:04:17 crc kubenswrapper[4736]: I0214 11:04:17.616753 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dt4b7"] Feb 14 11:04:18 crc kubenswrapper[4736]: I0214 11:04:18.409388 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0d4ecd4-8be3-4c23-ae88-f93464271353" path="/var/lib/kubelet/pods/e0d4ecd4-8be3-4c23-ae88-f93464271353/volumes" Feb 14 11:04:20 crc kubenswrapper[4736]: I0214 11:04:20.272661 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:04:21 crc kubenswrapper[4736]: I0214 11:04:21.612594 4736 generic.go:334] "Generic (PLEG): container finished" podID="ec8f51b8-16c7-4d32-9595-199616101d23" containerID="0764dedc1916aca33f037a0e407984fed429279d56b12e232cce076b77ae6bf4" exitCode=0 Feb 14 11:04:21 crc kubenswrapper[4736]: I0214 11:04:21.612675 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wb8dk" event={"ID":"ec8f51b8-16c7-4d32-9595-199616101d23","Type":"ContainerDied","Data":"0764dedc1916aca33f037a0e407984fed429279d56b12e232cce076b77ae6bf4"} Feb 14 11:04:22 crc kubenswrapper[4736]: I0214 11:04:22.834509 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 11:04:22 crc kubenswrapper[4736]: I0214 11:04:22.834571 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 
11:04:23.082337 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.204709 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-scripts\") pod \"ec8f51b8-16c7-4d32-9595-199616101d23\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.204822 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-config-data\") pod \"ec8f51b8-16c7-4d32-9595-199616101d23\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.204911 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l6k7\" (UniqueName: \"kubernetes.io/projected/ec8f51b8-16c7-4d32-9595-199616101d23-kube-api-access-4l6k7\") pod \"ec8f51b8-16c7-4d32-9595-199616101d23\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.205144 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-combined-ca-bundle\") pod \"ec8f51b8-16c7-4d32-9595-199616101d23\" (UID: \"ec8f51b8-16c7-4d32-9595-199616101d23\") " Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.211836 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-scripts" (OuterVolumeSpecName: "scripts") pod "ec8f51b8-16c7-4d32-9595-199616101d23" (UID: "ec8f51b8-16c7-4d32-9595-199616101d23"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.214729 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8f51b8-16c7-4d32-9595-199616101d23-kube-api-access-4l6k7" (OuterVolumeSpecName: "kube-api-access-4l6k7") pod "ec8f51b8-16c7-4d32-9595-199616101d23" (UID: "ec8f51b8-16c7-4d32-9595-199616101d23"). InnerVolumeSpecName "kube-api-access-4l6k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.233931 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-config-data" (OuterVolumeSpecName: "config-data") pod "ec8f51b8-16c7-4d32-9595-199616101d23" (UID: "ec8f51b8-16c7-4d32-9595-199616101d23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.246558 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec8f51b8-16c7-4d32-9595-199616101d23" (UID: "ec8f51b8-16c7-4d32-9595-199616101d23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.307325 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.307360 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.307368 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8f51b8-16c7-4d32-9595-199616101d23-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.307376 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l6k7\" (UniqueName: \"kubernetes.io/projected/ec8f51b8-16c7-4d32-9595-199616101d23-kube-api-access-4l6k7\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.640358 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wb8dk" event={"ID":"ec8f51b8-16c7-4d32-9595-199616101d23","Type":"ContainerDied","Data":"31aed8e42e7955cbeb7dad37de1bce7d09d289af748f5c07d7b6add2e5d2da20"} Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.640410 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31aed8e42e7955cbeb7dad37de1bce7d09d289af748f5c07d7b6add2e5d2da20" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.640427 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wb8dk" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.841033 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.841589 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a532c856-f3f7-4d38-a310-83a3df9bcae6" containerName="nova-scheduler-scheduler" containerID="cri-o://98c1879ddfcbe3b47f5d4dafb1399178cedcd119ea55e855971af8a90d9c271b" gracePeriod=30 Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.851004 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.851069 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.857909 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.858285 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-log" containerID="cri-o://2e0a3cc8ce29d1b26d9d491e58885badde645f4c279e2f3a48002dfc45a2c9cd" gracePeriod=30 Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.858341 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-api" containerID="cri-o://3cdb8607c5cbae7e4b06244c2450359d41343e1eebfff587938affac4a32225b" gracePeriod=30 Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.880783 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.881096 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-log" containerID="cri-o://cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e" gracePeriod=30 Feb 14 11:04:23 crc kubenswrapper[4736]: I0214 11:04:23.881302 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-metadata" containerID="cri-o://4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1" gracePeriod=30 Feb 14 11:04:24 crc kubenswrapper[4736]: I0214 11:04:24.662640 4736 generic.go:334] "Generic (PLEG): container finished" podID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerID="2e0a3cc8ce29d1b26d9d491e58885badde645f4c279e2f3a48002dfc45a2c9cd" exitCode=143 Feb 14 11:04:24 crc kubenswrapper[4736]: I0214 11:04:24.662709 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4b2a0d6-c156-459b-a19b-c5cd41fbc336","Type":"ContainerDied","Data":"2e0a3cc8ce29d1b26d9d491e58885badde645f4c279e2f3a48002dfc45a2c9cd"} Feb 14 11:04:24 crc kubenswrapper[4736]: I0214 11:04:24.665059 4736 generic.go:334] "Generic (PLEG): container finished" podID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerID="cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e" exitCode=143 Feb 14 11:04:24 crc kubenswrapper[4736]: I0214 11:04:24.665217 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0820df7b-fee4-438e-96bb-0dc1b3da39dc","Type":"ContainerDied","Data":"cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e"} Feb 14 11:04:25 crc kubenswrapper[4736]: I0214 11:04:25.692501 4736 generic.go:334] "Generic (PLEG): container finished" podID="a532c856-f3f7-4d38-a310-83a3df9bcae6" containerID="98c1879ddfcbe3b47f5d4dafb1399178cedcd119ea55e855971af8a90d9c271b" exitCode=0 Feb 14 11:04:25 crc kubenswrapper[4736]: I0214 11:04:25.692661 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a532c856-f3f7-4d38-a310-83a3df9bcae6","Type":"ContainerDied","Data":"98c1879ddfcbe3b47f5d4dafb1399178cedcd119ea55e855971af8a90d9c271b"} Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.068953 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.189411 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-config-data\") pod \"a532c856-f3f7-4d38-a310-83a3df9bcae6\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.189507 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wmxm\" (UniqueName: \"kubernetes.io/projected/a532c856-f3f7-4d38-a310-83a3df9bcae6-kube-api-access-6wmxm\") pod \"a532c856-f3f7-4d38-a310-83a3df9bcae6\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.189700 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-combined-ca-bundle\") pod \"a532c856-f3f7-4d38-a310-83a3df9bcae6\" (UID: \"a532c856-f3f7-4d38-a310-83a3df9bcae6\") " Feb 14 11:04:26 crc kubenswrapper[4736]: 
I0214 11:04:26.199420 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a532c856-f3f7-4d38-a310-83a3df9bcae6-kube-api-access-6wmxm" (OuterVolumeSpecName: "kube-api-access-6wmxm") pod "a532c856-f3f7-4d38-a310-83a3df9bcae6" (UID: "a532c856-f3f7-4d38-a310-83a3df9bcae6"). InnerVolumeSpecName "kube-api-access-6wmxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.224160 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a532c856-f3f7-4d38-a310-83a3df9bcae6" (UID: "a532c856-f3f7-4d38-a310-83a3df9bcae6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.231488 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-config-data" (OuterVolumeSpecName: "config-data") pod "a532c856-f3f7-4d38-a310-83a3df9bcae6" (UID: "a532c856-f3f7-4d38-a310-83a3df9bcae6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.292039 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.292252 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a532c856-f3f7-4d38-a310-83a3df9bcae6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.292333 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wmxm\" (UniqueName: \"kubernetes.io/projected/a532c856-f3f7-4d38-a310-83a3df9bcae6-kube-api-access-6wmxm\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.704368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a532c856-f3f7-4d38-a310-83a3df9bcae6","Type":"ContainerDied","Data":"ab19fa8315c17f8328f27bdd9f096891069dd536c72c72a561a0cd10a2ea871b"} Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.704419 4736 scope.go:117] "RemoveContainer" containerID="98c1879ddfcbe3b47f5d4dafb1399178cedcd119ea55e855971af8a90d9c271b" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.704541 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.740579 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.755133 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.776363 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 11:04:26 crc kubenswrapper[4736]: E0214 11:04:26.776987 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8f51b8-16c7-4d32-9595-199616101d23" containerName="nova-manage" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.777014 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8f51b8-16c7-4d32-9595-199616101d23" containerName="nova-manage" Feb 14 11:04:26 crc kubenswrapper[4736]: E0214 11:04:26.777047 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d4ecd4-8be3-4c23-ae88-f93464271353" containerName="init" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.777059 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d4ecd4-8be3-4c23-ae88-f93464271353" containerName="init" Feb 14 11:04:26 crc kubenswrapper[4736]: E0214 11:04:26.777079 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d4ecd4-8be3-4c23-ae88-f93464271353" containerName="dnsmasq-dns" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.777090 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d4ecd4-8be3-4c23-ae88-f93464271353" containerName="dnsmasq-dns" Feb 14 11:04:26 crc kubenswrapper[4736]: E0214 11:04:26.777127 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a532c856-f3f7-4d38-a310-83a3df9bcae6" containerName="nova-scheduler-scheduler" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.777140 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a532c856-f3f7-4d38-a310-83a3df9bcae6" containerName="nova-scheduler-scheduler" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.777459 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8f51b8-16c7-4d32-9595-199616101d23" containerName="nova-manage" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.777495 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a532c856-f3f7-4d38-a310-83a3df9bcae6" containerName="nova-scheduler-scheduler" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.777523 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d4ecd4-8be3-4c23-ae88-f93464271353" containerName="dnsmasq-dns" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.778520 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.781664 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.788606 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.902085 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bmd\" (UniqueName: \"kubernetes.io/projected/394f1f5f-0af2-4451-b497-6f15295099a4-kube-api-access-b5bmd\") pod \"nova-scheduler-0\" (UID: \"394f1f5f-0af2-4451-b497-6f15295099a4\") " pod="openstack/nova-scheduler-0" Feb 14 11:04:26 crc kubenswrapper[4736]: I0214 11:04:26.902158 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394f1f5f-0af2-4451-b497-6f15295099a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"394f1f5f-0af2-4451-b497-6f15295099a4\") " pod="openstack/nova-scheduler-0" Feb 14 11:04:26 crc 
kubenswrapper[4736]: I0214 11:04:26.902479 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394f1f5f-0af2-4451-b497-6f15295099a4-config-data\") pod \"nova-scheduler-0\" (UID: \"394f1f5f-0af2-4451-b497-6f15295099a4\") " pod="openstack/nova-scheduler-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.004327 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5bmd\" (UniqueName: \"kubernetes.io/projected/394f1f5f-0af2-4451-b497-6f15295099a4-kube-api-access-b5bmd\") pod \"nova-scheduler-0\" (UID: \"394f1f5f-0af2-4451-b497-6f15295099a4\") " pod="openstack/nova-scheduler-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.004427 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394f1f5f-0af2-4451-b497-6f15295099a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"394f1f5f-0af2-4451-b497-6f15295099a4\") " pod="openstack/nova-scheduler-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.004556 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394f1f5f-0af2-4451-b497-6f15295099a4-config-data\") pod \"nova-scheduler-0\" (UID: \"394f1f5f-0af2-4451-b497-6f15295099a4\") " pod="openstack/nova-scheduler-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.010526 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394f1f5f-0af2-4451-b497-6f15295099a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"394f1f5f-0af2-4451-b497-6f15295099a4\") " pod="openstack/nova-scheduler-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.012523 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/394f1f5f-0af2-4451-b497-6f15295099a4-config-data\") pod \"nova-scheduler-0\" (UID: \"394f1f5f-0af2-4451-b497-6f15295099a4\") " pod="openstack/nova-scheduler-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.039436 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5bmd\" (UniqueName: \"kubernetes.io/projected/394f1f5f-0af2-4451-b497-6f15295099a4-kube-api-access-b5bmd\") pod \"nova-scheduler-0\" (UID: \"394f1f5f-0af2-4451-b497-6f15295099a4\") " pod="openstack/nova-scheduler-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.057179 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:53004->10.217.0.197:8775: read: connection reset by peer" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.057176 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:53014->10.217.0.197:8775: read: connection reset by peer" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.098264 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.543102 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.614234 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-nova-metadata-tls-certs\") pod \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.614310 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0820df7b-fee4-438e-96bb-0dc1b3da39dc-logs\") pod \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.614339 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fp7z\" (UniqueName: \"kubernetes.io/projected/0820df7b-fee4-438e-96bb-0dc1b3da39dc-kube-api-access-4fp7z\") pod \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.614403 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-combined-ca-bundle\") pod \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.614761 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0820df7b-fee4-438e-96bb-0dc1b3da39dc-logs" (OuterVolumeSpecName: "logs") pod "0820df7b-fee4-438e-96bb-0dc1b3da39dc" (UID: "0820df7b-fee4-438e-96bb-0dc1b3da39dc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.615079 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-config-data\") pod \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\" (UID: \"0820df7b-fee4-438e-96bb-0dc1b3da39dc\") " Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.615428 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0820df7b-fee4-438e-96bb-0dc1b3da39dc-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.627138 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0820df7b-fee4-438e-96bb-0dc1b3da39dc-kube-api-access-4fp7z" (OuterVolumeSpecName: "kube-api-access-4fp7z") pod "0820df7b-fee4-438e-96bb-0dc1b3da39dc" (UID: "0820df7b-fee4-438e-96bb-0dc1b3da39dc"). InnerVolumeSpecName "kube-api-access-4fp7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.655615 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-config-data" (OuterVolumeSpecName: "config-data") pod "0820df7b-fee4-438e-96bb-0dc1b3da39dc" (UID: "0820df7b-fee4-438e-96bb-0dc1b3da39dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.660366 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.668345 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0820df7b-fee4-438e-96bb-0dc1b3da39dc" (UID: "0820df7b-fee4-438e-96bb-0dc1b3da39dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.708915 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0820df7b-fee4-438e-96bb-0dc1b3da39dc" (UID: "0820df7b-fee4-438e-96bb-0dc1b3da39dc"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.713391 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"394f1f5f-0af2-4451-b497-6f15295099a4","Type":"ContainerStarted","Data":"cf01dec84dca6df5e81ab6195a1f013e811172dc0866a65d475bfa7fca449fab"} Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.716233 4736 generic.go:334] "Generic (PLEG): container finished" podID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerID="4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1" exitCode=0 Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.716273 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0820df7b-fee4-438e-96bb-0dc1b3da39dc","Type":"ContainerDied","Data":"4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1"} Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.716324 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0820df7b-fee4-438e-96bb-0dc1b3da39dc","Type":"ContainerDied","Data":"e7781c7d462b2db51b636765963d41e1bac5b7392d98fef6dbacb9bb710f2cc0"} Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.716346 4736 scope.go:117] "RemoveContainer" containerID="4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.716557 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.721168 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.723083 4736 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.723112 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fp7z\" (UniqueName: \"kubernetes.io/projected/0820df7b-fee4-438e-96bb-0dc1b3da39dc-kube-api-access-4fp7z\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.723127 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820df7b-fee4-438e-96bb-0dc1b3da39dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.762472 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.762793 4736 scope.go:117] "RemoveContainer" containerID="cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.773196 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.795021 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:04:27 crc kubenswrapper[4736]: E0214 11:04:27.795733 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-metadata" Feb 
14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.795847 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-metadata" Feb 14 11:04:27 crc kubenswrapper[4736]: E0214 11:04:27.795945 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-log" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.796008 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-log" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.796231 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-metadata" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.796298 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" containerName="nova-metadata-log" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.797343 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.801436 4736 scope.go:117] "RemoveContainer" containerID="4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.801539 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.802001 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 11:04:27 crc kubenswrapper[4736]: E0214 11:04:27.802629 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1\": container with ID starting with 4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1 not found: ID does not exist" containerID="4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.802671 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1"} err="failed to get container status \"4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1\": rpc error: code = NotFound desc = could not find container \"4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1\": container with ID starting with 4043e9133e480c5cb15e5da7cecf17e884edbc79883c423b54971543205733e1 not found: ID does not exist" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.802705 4736 scope.go:117] "RemoveContainer" containerID="cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.802751 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:04:27 crc 
kubenswrapper[4736]: E0214 11:04:27.807884 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e\": container with ID starting with cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e not found: ID does not exist" containerID="cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.807921 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e"} err="failed to get container status \"cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e\": rpc error: code = NotFound desc = could not find container \"cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e\": container with ID starting with cf49343a7b4bd48b59d49857acf5748a321bd5fe1f53458d05a80b1eda9edf7e not found: ID does not exist" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.944173 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/501f8e75-5b0d-4226-b3d4-3ac92c58911c-logs\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.944471 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501f8e75-5b0d-4226-b3d4-3ac92c58911c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.944536 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtfd\" (UniqueName: 
\"kubernetes.io/projected/501f8e75-5b0d-4226-b3d4-3ac92c58911c-kube-api-access-zxtfd\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.944688 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501f8e75-5b0d-4226-b3d4-3ac92c58911c-config-data\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:27 crc kubenswrapper[4736]: I0214 11:04:27.945004 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/501f8e75-5b0d-4226-b3d4-3ac92c58911c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.046494 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxtfd\" (UniqueName: \"kubernetes.io/projected/501f8e75-5b0d-4226-b3d4-3ac92c58911c-kube-api-access-zxtfd\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.046548 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501f8e75-5b0d-4226-b3d4-3ac92c58911c-config-data\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.046629 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/501f8e75-5b0d-4226-b3d4-3ac92c58911c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.046656 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/501f8e75-5b0d-4226-b3d4-3ac92c58911c-logs\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.046695 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501f8e75-5b0d-4226-b3d4-3ac92c58911c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.047278 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/501f8e75-5b0d-4226-b3d4-3ac92c58911c-logs\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.050799 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501f8e75-5b0d-4226-b3d4-3ac92c58911c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.050880 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501f8e75-5b0d-4226-b3d4-3ac92c58911c-config-data\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.052456 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/501f8e75-5b0d-4226-b3d4-3ac92c58911c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.067161 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxtfd\" (UniqueName: \"kubernetes.io/projected/501f8e75-5b0d-4226-b3d4-3ac92c58911c-kube-api-access-zxtfd\") pod \"nova-metadata-0\" (UID: \"501f8e75-5b0d-4226-b3d4-3ac92c58911c\") " pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.182772 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.409000 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0820df7b-fee4-438e-96bb-0dc1b3da39dc" path="/var/lib/kubelet/pods/0820df7b-fee4-438e-96bb-0dc1b3da39dc/volumes" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.410317 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a532c856-f3f7-4d38-a310-83a3df9bcae6" path="/var/lib/kubelet/pods/a532c856-f3f7-4d38-a310-83a3df9bcae6/volumes" Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.627775 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.741431 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"394f1f5f-0af2-4451-b497-6f15295099a4","Type":"ContainerStarted","Data":"385f146515ae58980c1a5e8f634cbbd205818cd3d18aa4f1fae6037f7a9adf8b"} Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.743469 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"501f8e75-5b0d-4226-b3d4-3ac92c58911c","Type":"ContainerStarted","Data":"60bfc838e47e09a534e496bd259910b0e1d2791cd20967c30947898f09f34f13"} Feb 14 11:04:28 crc kubenswrapper[4736]: I0214 11:04:28.768161 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.768137535 podStartE2EDuration="2.768137535s" podCreationTimestamp="2026-02-14 11:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:04:28.763347106 +0000 UTC m=+1379.131974474" watchObservedRunningTime="2026-02-14 11:04:28.768137535 +0000 UTC m=+1379.136764943" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.756226 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"501f8e75-5b0d-4226-b3d4-3ac92c58911c","Type":"ContainerStarted","Data":"7aca8508ce597c3f2d309d0431ac5aad16776428698b7ea9d421ed8336217652"} Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.756678 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"501f8e75-5b0d-4226-b3d4-3ac92c58911c","Type":"ContainerStarted","Data":"77418fa2a39eaa39b786283fa8ff3f4c847af600e0794d94e8b554fcb8f492a9"} Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.760694 4736 generic.go:334] "Generic (PLEG): container finished" podID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerID="3cdb8607c5cbae7e4b06244c2450359d41343e1eebfff587938affac4a32225b" exitCode=0 Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.761298 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4b2a0d6-c156-459b-a19b-c5cd41fbc336","Type":"ContainerDied","Data":"3cdb8607c5cbae7e4b06244c2450359d41343e1eebfff587938affac4a32225b"} Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.761320 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"d4b2a0d6-c156-459b-a19b-c5cd41fbc336","Type":"ContainerDied","Data":"33aebcaafdb08aa9ec509d2840422cedf375edd305f2bdcb3205f3789bb25b16"} Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.761330 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33aebcaafdb08aa9ec509d2840422cedf375edd305f2bdcb3205f3789bb25b16" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.784183 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.78416445 podStartE2EDuration="2.78416445s" podCreationTimestamp="2026-02-14 11:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:04:29.773798159 +0000 UTC m=+1380.142425527" watchObservedRunningTime="2026-02-14 11:04:29.78416445 +0000 UTC m=+1380.152791828" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.799816 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.892422 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-combined-ca-bundle\") pod \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.892472 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-logs\") pod \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.892521 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-public-tls-certs\") pod \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.892579 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-config-data\") pod \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.892658 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-internal-tls-certs\") pod \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.892693 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph4h6\" (UniqueName: 
\"kubernetes.io/projected/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-kube-api-access-ph4h6\") pod \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\" (UID: \"d4b2a0d6-c156-459b-a19b-c5cd41fbc336\") " Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.892871 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-logs" (OuterVolumeSpecName: "logs") pod "d4b2a0d6-c156-459b-a19b-c5cd41fbc336" (UID: "d4b2a0d6-c156-459b-a19b-c5cd41fbc336"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.893716 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.908041 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-kube-api-access-ph4h6" (OuterVolumeSpecName: "kube-api-access-ph4h6") pod "d4b2a0d6-c156-459b-a19b-c5cd41fbc336" (UID: "d4b2a0d6-c156-459b-a19b-c5cd41fbc336"). InnerVolumeSpecName "kube-api-access-ph4h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.922914 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-config-data" (OuterVolumeSpecName: "config-data") pod "d4b2a0d6-c156-459b-a19b-c5cd41fbc336" (UID: "d4b2a0d6-c156-459b-a19b-c5cd41fbc336"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.924852 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4b2a0d6-c156-459b-a19b-c5cd41fbc336" (UID: "d4b2a0d6-c156-459b-a19b-c5cd41fbc336"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.953235 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4b2a0d6-c156-459b-a19b-c5cd41fbc336" (UID: "d4b2a0d6-c156-459b-a19b-c5cd41fbc336"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.964635 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4b2a0d6-c156-459b-a19b-c5cd41fbc336" (UID: "d4b2a0d6-c156-459b-a19b-c5cd41fbc336"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.995720 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.995776 4736 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.995806 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph4h6\" (UniqueName: \"kubernetes.io/projected/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-kube-api-access-ph4h6\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.995815 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:29 crc kubenswrapper[4736]: I0214 11:04:29.995824 4736 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4b2a0d6-c156-459b-a19b-c5cd41fbc336-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.273166 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54b8d5f54d-bvjc4" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.776889 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.815000 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.828514 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.840003 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:30 crc kubenswrapper[4736]: E0214 11:04:30.844382 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-api" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.844443 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-api" Feb 14 11:04:30 crc kubenswrapper[4736]: E0214 11:04:30.844516 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-log" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.844527 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-log" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.845386 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-log" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.845428 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" containerName="nova-api-api" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.856891 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.860641 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.860826 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.862184 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 11:04:30 crc kubenswrapper[4736]: I0214 11:04:30.867321 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.016045 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.016154 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/056456b3-9911-4a23-9322-7072e9170cbe-logs\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.016205 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l97tm\" (UniqueName: \"kubernetes.io/projected/056456b3-9911-4a23-9322-7072e9170cbe-kube-api-access-l97tm\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.016398 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-config-data\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.016527 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.016719 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-public-tls-certs\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.118359 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/056456b3-9911-4a23-9322-7072e9170cbe-logs\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.118418 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l97tm\" (UniqueName: \"kubernetes.io/projected/056456b3-9911-4a23-9322-7072e9170cbe-kube-api-access-l97tm\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.118597 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-config-data\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc 
kubenswrapper[4736]: I0214 11:04:31.118664 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.118719 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-public-tls-certs\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.118820 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.118864 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/056456b3-9911-4a23-9322-7072e9170cbe-logs\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.126034 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.135378 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-internal-tls-certs\") pod \"nova-api-0\" 
(UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.135378 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-public-tls-certs\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.139507 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056456b3-9911-4a23-9322-7072e9170cbe-config-data\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.139999 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l97tm\" (UniqueName: \"kubernetes.io/projected/056456b3-9911-4a23-9322-7072e9170cbe-kube-api-access-l97tm\") pod \"nova-api-0\" (UID: \"056456b3-9911-4a23-9322-7072e9170cbe\") " pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.187048 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.679259 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 11:04:31 crc kubenswrapper[4736]: I0214 11:04:31.789729 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"056456b3-9911-4a23-9322-7072e9170cbe","Type":"ContainerStarted","Data":"ef9319d6da35fabfef7a13a31cf5f2c60a3379a6c1efd2954c922e0c83b777e8"} Feb 14 11:04:32 crc kubenswrapper[4736]: I0214 11:04:32.099212 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 11:04:32 crc kubenswrapper[4736]: I0214 11:04:32.409783 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b2a0d6-c156-459b-a19b-c5cd41fbc336" path="/var/lib/kubelet/pods/d4b2a0d6-c156-459b-a19b-c5cd41fbc336/volumes" Feb 14 11:04:32 crc kubenswrapper[4736]: I0214 11:04:32.799792 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"056456b3-9911-4a23-9322-7072e9170cbe","Type":"ContainerStarted","Data":"ee2c205a3a9bd8158a189c56baf42285982782830cfa51c420eb0a777793940e"} Feb 14 11:04:32 crc kubenswrapper[4736]: I0214 11:04:32.799838 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"056456b3-9911-4a23-9322-7072e9170cbe","Type":"ContainerStarted","Data":"ea379853be69250185878fbd1130010689b2a805e6afd7287afec73dde74def0"} Feb 14 11:04:32 crc kubenswrapper[4736]: I0214 11:04:32.828641 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8286163269999998 podStartE2EDuration="2.828616327s" podCreationTimestamp="2026-02-14 11:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:04:32.815335531 +0000 UTC m=+1383.183962929" 
watchObservedRunningTime="2026-02-14 11:04:32.828616327 +0000 UTC m=+1383.197243725" Feb 14 11:04:33 crc kubenswrapper[4736]: I0214 11:04:33.183632 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 11:04:33 crc kubenswrapper[4736]: I0214 11:04:33.184041 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.191638 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.335810 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-combined-ca-bundle\") pod \"7d33f3d6-2722-42c8-b996-4e80eb75860a\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.335918 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d33f3d6-2722-42c8-b996-4e80eb75860a-logs\") pod \"7d33f3d6-2722-42c8-b996-4e80eb75860a\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.335975 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-tls-certs\") pod \"7d33f3d6-2722-42c8-b996-4e80eb75860a\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.336532 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d33f3d6-2722-42c8-b996-4e80eb75860a-logs" (OuterVolumeSpecName: "logs") pod "7d33f3d6-2722-42c8-b996-4e80eb75860a" (UID: "7d33f3d6-2722-42c8-b996-4e80eb75860a"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.336643 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-config-data\") pod \"7d33f3d6-2722-42c8-b996-4e80eb75860a\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.336735 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/7d33f3d6-2722-42c8-b996-4e80eb75860a-kube-api-access-4nbnv\") pod \"7d33f3d6-2722-42c8-b996-4e80eb75860a\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.336850 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-secret-key\") pod \"7d33f3d6-2722-42c8-b996-4e80eb75860a\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.336889 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-scripts\") pod \"7d33f3d6-2722-42c8-b996-4e80eb75860a\" (UID: \"7d33f3d6-2722-42c8-b996-4e80eb75860a\") " Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.337578 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d33f3d6-2722-42c8-b996-4e80eb75860a-logs\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.341332 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d33f3d6-2722-42c8-b996-4e80eb75860a-kube-api-access-4nbnv" (OuterVolumeSpecName: 
"kube-api-access-4nbnv") pod "7d33f3d6-2722-42c8-b996-4e80eb75860a" (UID: "7d33f3d6-2722-42c8-b996-4e80eb75860a"). InnerVolumeSpecName "kube-api-access-4nbnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.354053 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7d33f3d6-2722-42c8-b996-4e80eb75860a" (UID: "7d33f3d6-2722-42c8-b996-4e80eb75860a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.372473 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d33f3d6-2722-42c8-b996-4e80eb75860a" (UID: "7d33f3d6-2722-42c8-b996-4e80eb75860a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.389857 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-config-data" (OuterVolumeSpecName: "config-data") pod "7d33f3d6-2722-42c8-b996-4e80eb75860a" (UID: "7d33f3d6-2722-42c8-b996-4e80eb75860a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.398112 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-scripts" (OuterVolumeSpecName: "scripts") pod "7d33f3d6-2722-42c8-b996-4e80eb75860a" (UID: "7d33f3d6-2722-42c8-b996-4e80eb75860a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.398664 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "7d33f3d6-2722-42c8-b996-4e80eb75860a" (UID: "7d33f3d6-2722-42c8-b996-4e80eb75860a"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.439061 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.439091 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.439105 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.439116 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/7d33f3d6-2722-42c8-b996-4e80eb75860a-kube-api-access-4nbnv\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.439127 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d33f3d6-2722-42c8-b996-4e80eb75860a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.439137 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/7d33f3d6-2722-42c8-b996-4e80eb75860a-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.886073 4736 generic.go:334] "Generic (PLEG): container finished" podID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerID="6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9" exitCode=137 Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.886145 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerDied","Data":"6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9"} Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.886199 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b8d5f54d-bvjc4" event={"ID":"7d33f3d6-2722-42c8-b996-4e80eb75860a","Type":"ContainerDied","Data":"f92cb76843e9644ff052bb11c175e5a9526ac0fbad72806d17069a56b766f77c"} Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.886270 4736 scope.go:117] "RemoveContainer" containerID="622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.886466 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54b8d5f54d-bvjc4" Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.949827 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54b8d5f54d-bvjc4"] Feb 14 11:04:35 crc kubenswrapper[4736]: I0214 11:04:35.957488 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54b8d5f54d-bvjc4"] Feb 14 11:04:36 crc kubenswrapper[4736]: I0214 11:04:36.098024 4736 scope.go:117] "RemoveContainer" containerID="6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9" Feb 14 11:04:36 crc kubenswrapper[4736]: I0214 11:04:36.132169 4736 scope.go:117] "RemoveContainer" containerID="622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a" Feb 14 11:04:36 crc kubenswrapper[4736]: E0214 11:04:36.132680 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a\": container with ID starting with 622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a not found: ID does not exist" containerID="622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a" Feb 14 11:04:36 crc kubenswrapper[4736]: I0214 11:04:36.132727 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a"} err="failed to get container status \"622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a\": rpc error: code = NotFound desc = could not find container \"622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a\": container with ID starting with 622269b45305c4509f32edeefca9f234253d1aee7a4bc6c72995966dacaf602a not found: ID does not exist" Feb 14 11:04:36 crc kubenswrapper[4736]: I0214 11:04:36.132870 4736 scope.go:117] "RemoveContainer" containerID="6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9" Feb 14 11:04:36 
crc kubenswrapper[4736]: E0214 11:04:36.133349 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9\": container with ID starting with 6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9 not found: ID does not exist" containerID="6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9" Feb 14 11:04:36 crc kubenswrapper[4736]: I0214 11:04:36.133429 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9"} err="failed to get container status \"6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9\": rpc error: code = NotFound desc = could not find container \"6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9\": container with ID starting with 6d759df50b0ee06def4032c00377ace6cce2427fc981d7d68fec0bdcee8830e9 not found: ID does not exist" Feb 14 11:04:36 crc kubenswrapper[4736]: I0214 11:04:36.417596 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" path="/var/lib/kubelet/pods/7d33f3d6-2722-42c8-b996-4e80eb75860a/volumes" Feb 14 11:04:37 crc kubenswrapper[4736]: I0214 11:04:37.100049 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 11:04:37 crc kubenswrapper[4736]: I0214 11:04:37.159057 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 11:04:37 crc kubenswrapper[4736]: I0214 11:04:37.958950 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 11:04:38 crc kubenswrapper[4736]: I0214 11:04:38.183482 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 11:04:38 crc 
kubenswrapper[4736]: I0214 11:04:38.183530 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 11:04:39 crc kubenswrapper[4736]: I0214 11:04:39.199924 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="501f8e75-5b0d-4226-b3d4-3ac92c58911c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 11:04:39 crc kubenswrapper[4736]: I0214 11:04:39.199957 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="501f8e75-5b0d-4226-b3d4-3ac92c58911c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 11:04:41 crc kubenswrapper[4736]: I0214 11:04:41.187817 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 11:04:41 crc kubenswrapper[4736]: I0214 11:04:41.188303 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 11:04:41 crc kubenswrapper[4736]: I0214 11:04:41.987934 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 11:04:42 crc kubenswrapper[4736]: I0214 11:04:42.200044 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="056456b3-9911-4a23-9322-7072e9170cbe" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 11:04:42 crc kubenswrapper[4736]: I0214 11:04:42.200044 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="056456b3-9911-4a23-9322-7072e9170cbe" containerName="nova-api-api" 
probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 11:04:47 crc kubenswrapper[4736]: I0214 11:04:47.695518 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:04:47 crc kubenswrapper[4736]: I0214 11:04:47.696201 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:04:48 crc kubenswrapper[4736]: I0214 11:04:48.187864 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 11:04:48 crc kubenswrapper[4736]: I0214 11:04:48.190179 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 11:04:48 crc kubenswrapper[4736]: I0214 11:04:48.193971 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 11:04:49 crc kubenswrapper[4736]: I0214 11:04:49.037721 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 11:04:51 crc kubenswrapper[4736]: I0214 11:04:51.196836 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 11:04:51 crc kubenswrapper[4736]: I0214 11:04:51.197317 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 11:04:51 crc kubenswrapper[4736]: I0214 11:04:51.197631 4736 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 11:04:51 crc kubenswrapper[4736]: I0214 11:04:51.197653 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 11:04:51 crc kubenswrapper[4736]: I0214 11:04:51.206078 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 11:04:51 crc kubenswrapper[4736]: I0214 11:04:51.208946 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 11:04:59 crc kubenswrapper[4736]: I0214 11:04:59.158271 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 11:05:00 crc kubenswrapper[4736]: I0214 11:05:00.323901 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 11:05:03 crc kubenswrapper[4736]: I0214 11:05:03.164524 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" containerName="rabbitmq" containerID="cri-o://ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84" gracePeriod=604796 Feb 14 11:05:04 crc kubenswrapper[4736]: I0214 11:05:04.330628 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0bb03a69-d572-4b83-97b9-13d33b501b6a" containerName="rabbitmq" containerID="cri-o://51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb" gracePeriod=604796 Feb 14 11:05:09 crc kubenswrapper[4736]: I0214 11:05:09.836863 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019509 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-confd\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019575 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-tls\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019665 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-plugins\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019723 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-config-data\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019776 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-pod-info\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019811 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-server-conf\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019847 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019877 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-plugins-conf\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019929 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-erlang-cookie-secret\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.019970 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8njjl\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-kube-api-access-8njjl\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.020037 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-erlang-cookie\") pod \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\" (UID: \"34ab9b0c-bef8-4c48-9557-89ad8b9d864f\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 
11:05:10.021054 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.022462 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.023204 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.027400 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.027976 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-pod-info" (OuterVolumeSpecName: "pod-info") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.029730 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.029724 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.062456 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-kube-api-access-8njjl" (OuterVolumeSpecName: "kube-api-access-8njjl") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "kube-api-access-8njjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.072045 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-config-data" (OuterVolumeSpecName: "config-data") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.106133 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-server-conf" (OuterVolumeSpecName: "server-conf") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.128485 4736 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.128532 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.128543 4736 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.128552 4736 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc 
kubenswrapper[4736]: I0214 11:05:10.128561 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8njjl\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-kube-api-access-8njjl\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.128571 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.128579 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.128589 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.128597 4736 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.128605 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.152966 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.197535 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "34ab9b0c-bef8-4c48-9557-89ad8b9d864f" (UID: "34ab9b0c-bef8-4c48-9557-89ad8b9d864f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.232821 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34ab9b0c-bef8-4c48-9557-89ad8b9d864f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.232857 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.268684 4736 generic.go:334] "Generic (PLEG): container finished" podID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" containerID="ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84" exitCode=0 Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.268732 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"34ab9b0c-bef8-4c48-9557-89ad8b9d864f","Type":"ContainerDied","Data":"ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84"} Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.268786 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"34ab9b0c-bef8-4c48-9557-89ad8b9d864f","Type":"ContainerDied","Data":"d60b58724c61f32a700d07a7bcb818986e8364965d0b0172d0db24041cfed337"} Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.268805 4736 scope.go:117] "RemoveContainer" containerID="ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.268940 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.315814 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.316065 4736 scope.go:117] "RemoveContainer" containerID="e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.327028 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.345542 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 11:05:10 crc kubenswrapper[4736]: E0214 11:05:10.346579 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.346673 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" Feb 14 11:05:10 crc kubenswrapper[4736]: E0214 11:05:10.346730 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.346815 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" Feb 14 11:05:10 crc kubenswrapper[4736]: E0214 11:05:10.346879 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" containerName="setup-container" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.346923 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" containerName="setup-container" Feb 14 11:05:10 crc kubenswrapper[4736]: E0214 11:05:10.346977 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" containerName="rabbitmq" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.347021 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" containerName="rabbitmq" Feb 14 11:05:10 crc kubenswrapper[4736]: E0214 11:05:10.347087 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.347135 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" Feb 14 11:05:10 crc kubenswrapper[4736]: E0214 11:05:10.347192 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon-log" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.347239 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon-log" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.347438 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.347493 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon-log" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.347548 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.347600 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d33f3d6-2722-42c8-b996-4e80eb75860a" containerName="horizon" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.347662 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" 
containerName="rabbitmq" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.348664 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.351690 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.351985 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.352075 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.352405 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.352614 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jb45p" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.352719 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.365305 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.368347 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.372606 4736 scope.go:117] "RemoveContainer" containerID="ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84" Feb 14 11:05:10 crc kubenswrapper[4736]: E0214 11:05:10.373103 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84\": container with ID starting with 
ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84 not found: ID does not exist" containerID="ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.373146 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84"} err="failed to get container status \"ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84\": rpc error: code = NotFound desc = could not find container \"ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84\": container with ID starting with ad0d42e54301b3b080a037c87818949a823db3643700165f631d86c9192c1d84 not found: ID does not exist" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.373172 4736 scope.go:117] "RemoveContainer" containerID="e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13" Feb 14 11:05:10 crc kubenswrapper[4736]: E0214 11:05:10.373479 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13\": container with ID starting with e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13 not found: ID does not exist" containerID="e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.373502 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13"} err="failed to get container status \"e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13\": rpc error: code = NotFound desc = could not find container \"e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13\": container with ID starting with e3eeb7fa34a465ae182d9846ecaa4b431c1a1901f4ccdd49fedc6bf4546efd13 not found: ID does not 
exist" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.433525 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ab9b0c-bef8-4c48-9557-89ad8b9d864f" path="/var/lib/kubelet/pods/34ab9b0c-bef8-4c48-9557-89ad8b9d864f/volumes" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539105 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539177 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539301 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539329 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7b1cfbb-0f84-4915-bae6-0bd165726dba-config-data\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539357 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539402 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs88w\" (UniqueName: \"kubernetes.io/projected/a7b1cfbb-0f84-4915-bae6-0bd165726dba-kube-api-access-hs88w\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539451 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7b1cfbb-0f84-4915-bae6-0bd165726dba-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539494 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7b1cfbb-0f84-4915-bae6-0bd165726dba-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539536 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7b1cfbb-0f84-4915-bae6-0bd165726dba-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539572 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7b1cfbb-0f84-4915-bae6-0bd165726dba-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.539608 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.644658 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7b1cfbb-0f84-4915-bae6-0bd165726dba-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.645039 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7b1cfbb-0f84-4915-bae6-0bd165726dba-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.645083 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7b1cfbb-0f84-4915-bae6-0bd165726dba-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.645119 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7b1cfbb-0f84-4915-bae6-0bd165726dba-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc 
kubenswrapper[4736]: I0214 11:05:10.645148 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.645198 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.645231 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.645334 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.645362 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7b1cfbb-0f84-4915-bae6-0bd165726dba-config-data\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.645389 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.645429 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs88w\" (UniqueName: \"kubernetes.io/projected/a7b1cfbb-0f84-4915-bae6-0bd165726dba-kube-api-access-hs88w\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.646094 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.646729 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.647411 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7b1cfbb-0f84-4915-bae6-0bd165726dba-config-data\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.650101 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7b1cfbb-0f84-4915-bae6-0bd165726dba-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " 
pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.650328 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7b1cfbb-0f84-4915-bae6-0bd165726dba-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.651710 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7b1cfbb-0f84-4915-bae6-0bd165726dba-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.651934 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.654331 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.657573 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7b1cfbb-0f84-4915-bae6-0bd165726dba-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.669047 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/a7b1cfbb-0f84-4915-bae6-0bd165726dba-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.671069 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs88w\" (UniqueName: \"kubernetes.io/projected/a7b1cfbb-0f84-4915-bae6-0bd165726dba-kube-api-access-hs88w\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.721685 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a7b1cfbb-0f84-4915-bae6-0bd165726dba\") " pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.843473 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949330 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-erlang-cookie\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949412 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949434 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bb03a69-d572-4b83-97b9-13d33b501b6a-erlang-cookie-secret\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949508 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-plugins-conf\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949557 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-plugins\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949583 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-config-data\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949646 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-confd\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949677 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-tls\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949715 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-server-conf\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949777 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bb03a69-d572-4b83-97b9-13d33b501b6a-pod-info\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.949801 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdp6r\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-kube-api-access-bdp6r\") pod \"0bb03a69-d572-4b83-97b9-13d33b501b6a\" (UID: \"0bb03a69-d572-4b83-97b9-13d33b501b6a\") " Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 
11:05:10.950041 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.950166 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.950412 4736 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.950430 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.950763 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.952727 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb03a69-d572-4b83-97b9-13d33b501b6a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.956253 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.956835 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.956877 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-kube-api-access-bdp6r" (OuterVolumeSpecName: "kube-api-access-bdp6r") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "kube-api-access-bdp6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.959553 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0bb03a69-d572-4b83-97b9-13d33b501b6a-pod-info" (OuterVolumeSpecName: "pod-info") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.982193 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 11:05:10 crc kubenswrapper[4736]: I0214 11:05:10.983223 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-config-data" (OuterVolumeSpecName: "config-data") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.013022 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-server-conf" (OuterVolumeSpecName: "server-conf") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.052153 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.052184 4736 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bb03a69-d572-4b83-97b9-13d33b501b6a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.052193 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.052201 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.052212 4736 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bb03a69-d572-4b83-97b9-13d33b501b6a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.052222 4736 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bb03a69-d572-4b83-97b9-13d33b501b6a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.052231 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdp6r\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-kube-api-access-bdp6r\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.052240 4736 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.066952 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0bb03a69-d572-4b83-97b9-13d33b501b6a" (UID: "0bb03a69-d572-4b83-97b9-13d33b501b6a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.071027 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.155863 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.156118 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bb03a69-d572-4b83-97b9-13d33b501b6a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.279730 4736 generic.go:334] "Generic (PLEG): container finished" podID="0bb03a69-d572-4b83-97b9-13d33b501b6a" containerID="51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb" exitCode=0 Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.279807 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bb03a69-d572-4b83-97b9-13d33b501b6a","Type":"ContainerDied","Data":"51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb"} Feb 14 11:05:11 crc 
kubenswrapper[4736]: I0214 11:05:11.279858 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bb03a69-d572-4b83-97b9-13d33b501b6a","Type":"ContainerDied","Data":"f643b23df5c20f3673c9140b918b909af49890b9fb4f92fcf7f2306957083983"} Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.279876 4736 scope.go:117] "RemoveContainer" containerID="51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.280152 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.319985 4736 scope.go:117] "RemoveContainer" containerID="027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.344805 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.351088 4736 scope.go:117] "RemoveContainer" containerID="51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb" Feb 14 11:05:11 crc kubenswrapper[4736]: E0214 11:05:11.353248 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb\": container with ID starting with 51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb not found: ID does not exist" containerID="51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.353301 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb"} err="failed to get container status \"51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb\": rpc error: code = NotFound 
desc = could not find container \"51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb\": container with ID starting with 51cc046861d6aaad20bbd127973e6c5b3599c86d0231a88b2aa82d6231cc66fb not found: ID does not exist" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.353335 4736 scope.go:117] "RemoveContainer" containerID="027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.357877 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 11:05:11 crc kubenswrapper[4736]: E0214 11:05:11.357953 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b\": container with ID starting with 027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b not found: ID does not exist" containerID="027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.358001 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b"} err="failed to get container status \"027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b\": rpc error: code = NotFound desc = could not find container \"027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b\": container with ID starting with 027490c157c5cdae2c27f0c9f788d6061e281522a45ba19977d0b0727371215b not found: ID does not exist" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.372220 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 11:05:11 crc kubenswrapper[4736]: E0214 11:05:11.372673 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb03a69-d572-4b83-97b9-13d33b501b6a" containerName="setup-container" Feb 14 
11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.372697 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb03a69-d572-4b83-97b9-13d33b501b6a" containerName="setup-container" Feb 14 11:05:11 crc kubenswrapper[4736]: E0214 11:05:11.372740 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb03a69-d572-4b83-97b9-13d33b501b6a" containerName="rabbitmq" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.372766 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb03a69-d572-4b83-97b9-13d33b501b6a" containerName="rabbitmq" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.372995 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb03a69-d572-4b83-97b9-13d33b501b6a" containerName="rabbitmq" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.374577 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.380686 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-c8fbh" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.380899 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.381317 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.381435 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.384670 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.384766 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 14 
11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.386100 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.386761 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.467830 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567375 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567655 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567700 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567723 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070e414c-ea91-48aa-871d-ebfed740c5b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567781 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567812 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070e414c-ea91-48aa-871d-ebfed740c5b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567831 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvr5\" (UniqueName: \"kubernetes.io/projected/070e414c-ea91-48aa-871d-ebfed740c5b3-kube-api-access-pvvr5\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567882 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567905 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070e414c-ea91-48aa-871d-ebfed740c5b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.567945 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070e414c-ea91-48aa-871d-ebfed740c5b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.568028 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070e414c-ea91-48aa-871d-ebfed740c5b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669511 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669573 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669601 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070e414c-ea91-48aa-871d-ebfed740c5b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669642 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669672 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070e414c-ea91-48aa-871d-ebfed740c5b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669691 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvvr5\" (UniqueName: \"kubernetes.io/projected/070e414c-ea91-48aa-871d-ebfed740c5b3-kube-api-access-pvvr5\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669719 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669752 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070e414c-ea91-48aa-871d-ebfed740c5b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669772 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070e414c-ea91-48aa-871d-ebfed740c5b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669822 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070e414c-ea91-48aa-871d-ebfed740c5b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.669841 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.670225 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.670256 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.670776 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/070e414c-ea91-48aa-871d-ebfed740c5b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.670934 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070e414c-ea91-48aa-871d-ebfed740c5b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.671211 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.671529 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070e414c-ea91-48aa-871d-ebfed740c5b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.673899 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.676118 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070e414c-ea91-48aa-871d-ebfed740c5b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.676228 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070e414c-ea91-48aa-871d-ebfed740c5b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.677622 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070e414c-ea91-48aa-871d-ebfed740c5b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.707926 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvvr5\" (UniqueName: \"kubernetes.io/projected/070e414c-ea91-48aa-871d-ebfed740c5b3-kube-api-access-pvvr5\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.709136 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"070e414c-ea91-48aa-871d-ebfed740c5b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:11 crc kubenswrapper[4736]: I0214 11:05:11.995471 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.147013 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"] Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.149858 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.152640 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.168111 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"] Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.303223 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7b1cfbb-0f84-4915-bae6-0bd165726dba","Type":"ContainerStarted","Data":"5cb3ae271b270e385858bb4bb49efedf714e3754d28b5590dfdcd815a9a7f1ef"} Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.313259 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpcm7\" (UniqueName: \"kubernetes.io/projected/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-kube-api-access-lpcm7\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.313319 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.313340 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.313475 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.313557 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.313631 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-config\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.313754 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 
11:05:12.407039 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb03a69-d572-4b83-97b9-13d33b501b6a" path="/var/lib/kubelet/pods/0bb03a69-d572-4b83-97b9-13d33b501b6a/volumes" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.415699 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.415768 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.415882 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.415919 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.415944 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-config\") pod 
\"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.416023 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.416055 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpcm7\" (UniqueName: \"kubernetes.io/projected/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-kube-api-access-lpcm7\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.416621 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.416990 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.417090 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-config\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: 
\"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.417192 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.417352 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.417657 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.443717 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpcm7\" (UniqueName: \"kubernetes.io/projected/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-kube-api-access-lpcm7\") pod \"dnsmasq-dns-79bd4cc8c9-rfgtl\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.537260 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.582445 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 11:05:12 crc kubenswrapper[4736]: W0214 11:05:12.586941 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070e414c_ea91_48aa_871d_ebfed740c5b3.slice/crio-f330ad634079dc4abf7c8c6373ac5de3c042343f13c87fbebe64cf0841931d69 WatchSource:0}: Error finding container f330ad634079dc4abf7c8c6373ac5de3c042343f13c87fbebe64cf0841931d69: Status 404 returned error can't find the container with id f330ad634079dc4abf7c8c6373ac5de3c042343f13c87fbebe64cf0841931d69 Feb 14 11:05:12 crc kubenswrapper[4736]: I0214 11:05:12.970793 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"] Feb 14 11:05:12 crc kubenswrapper[4736]: W0214 11:05:12.981476 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ee590ab_1eb2_4ec9_ac1c_ff9822a8c74b.slice/crio-30039b08ae57afa3631b440aa13a69f2e065cde3ac753fd22ab8a547bccb96ae WatchSource:0}: Error finding container 30039b08ae57afa3631b440aa13a69f2e065cde3ac753fd22ab8a547bccb96ae: Status 404 returned error can't find the container with id 30039b08ae57afa3631b440aa13a69f2e065cde3ac753fd22ab8a547bccb96ae Feb 14 11:05:13 crc kubenswrapper[4736]: I0214 11:05:13.321495 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070e414c-ea91-48aa-871d-ebfed740c5b3","Type":"ContainerStarted","Data":"f330ad634079dc4abf7c8c6373ac5de3c042343f13c87fbebe64cf0841931d69"} Feb 14 11:05:13 crc kubenswrapper[4736]: I0214 11:05:13.327132 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"a7b1cfbb-0f84-4915-bae6-0bd165726dba","Type":"ContainerStarted","Data":"790133d90d3ade6091ca340a0389696cc94f6afd00d318966278c491d341661a"}
Feb 14 11:05:13 crc kubenswrapper[4736]: I0214 11:05:13.332979 4736 generic.go:334] "Generic (PLEG): container finished" podID="7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" containerID="726d9bc12fc0ad3cd2b5e65fe2dc7d724efd722a71936c7790192a73b9077bd5" exitCode=0
Feb 14 11:05:13 crc kubenswrapper[4736]: I0214 11:05:13.333055 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" event={"ID":"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b","Type":"ContainerDied","Data":"726d9bc12fc0ad3cd2b5e65fe2dc7d724efd722a71936c7790192a73b9077bd5"}
Feb 14 11:05:13 crc kubenswrapper[4736]: I0214 11:05:13.334106 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" event={"ID":"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b","Type":"ContainerStarted","Data":"30039b08ae57afa3631b440aa13a69f2e065cde3ac753fd22ab8a547bccb96ae"}
Feb 14 11:05:13 crc kubenswrapper[4736]: E0214 11:05:13.408493 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ee590ab_1eb2_4ec9_ac1c_ff9822a8c74b.slice/crio-conmon-726d9bc12fc0ad3cd2b5e65fe2dc7d724efd722a71936c7790192a73b9077bd5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ee590ab_1eb2_4ec9_ac1c_ff9822a8c74b.slice/crio-726d9bc12fc0ad3cd2b5e65fe2dc7d724efd722a71936c7790192a73b9077bd5.scope\": RecentStats: unable to find data in memory cache]"
Feb 14 11:05:14 crc kubenswrapper[4736]: I0214 11:05:14.343289 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" event={"ID":"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b","Type":"ContainerStarted","Data":"04d9879aaae2f9350cd52c5dc068f7b44291225390de4bf7caed3b4403cc9d0c"}
Feb 14 11:05:14 crc kubenswrapper[4736]: I0214 11:05:14.343692 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"
Feb 14 11:05:14 crc kubenswrapper[4736]: I0214 11:05:14.346025 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070e414c-ea91-48aa-871d-ebfed740c5b3","Type":"ContainerStarted","Data":"091bebd97cb73f89d571cacd2fa6103fb7408e097e663d1ec8565be0706434c3"}
Feb 14 11:05:14 crc kubenswrapper[4736]: I0214 11:05:14.374328 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" podStartSLOduration=2.374309665 podStartE2EDuration="2.374309665s" podCreationTimestamp="2026-02-14 11:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:05:14.36823421 +0000 UTC m=+1424.736861578" watchObservedRunningTime="2026-02-14 11:05:14.374309665 +0000 UTC m=+1424.742937033"
Feb 14 11:05:17 crc kubenswrapper[4736]: I0214 11:05:17.695421 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 11:05:17 crc kubenswrapper[4736]: I0214 11:05:17.696143 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.540105 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.625001 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5ngdn"]
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.625352 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" podUID="d8c617df-907e-46fa-b6be-b0c62b56afc9" containerName="dnsmasq-dns" containerID="cri-o://e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556" gracePeriod=10
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.813856 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-4wwj2"]
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.815492 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.863494 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-4wwj2"]
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.995829 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.996050 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.996100 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-config\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.996132 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.996177 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7r9n\" (UniqueName: \"kubernetes.io/projected/87fe97c7-f360-4d7b-988f-0779aa692cde-kube-api-access-p7r9n\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.996201 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:22 crc kubenswrapper[4736]: I0214 11:05:22.996219 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.097871 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.097948 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-config\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.097974 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.098021 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7r9n\" (UniqueName: \"kubernetes.io/projected/87fe97c7-f360-4d7b-988f-0779aa692cde-kube-api-access-p7r9n\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.098041 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.098061 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.098127 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.098965 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.099478 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.100084 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-config\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.100574 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.101337 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.102498 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87fe97c7-f360-4d7b-988f-0779aa692cde-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.118228 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7r9n\" (UniqueName: \"kubernetes.io/projected/87fe97c7-f360-4d7b-988f-0779aa692cde-kube-api-access-p7r9n\") pod \"dnsmasq-dns-6ff66b85ff-4wwj2\" (UID: \"87fe97c7-f360-4d7b-988f-0779aa692cde\") " pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.133407 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.230392 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.402098 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-svc\") pod \"d8c617df-907e-46fa-b6be-b0c62b56afc9\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") "
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.402387 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-nb\") pod \"d8c617df-907e-46fa-b6be-b0c62b56afc9\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") "
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.402521 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-sb\") pod \"d8c617df-907e-46fa-b6be-b0c62b56afc9\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") "
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.402543 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-config\") pod \"d8c617df-907e-46fa-b6be-b0c62b56afc9\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") "
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.402565 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-swift-storage-0\") pod \"d8c617df-907e-46fa-b6be-b0c62b56afc9\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") "
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.402596 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk2ws\" (UniqueName: \"kubernetes.io/projected/d8c617df-907e-46fa-b6be-b0c62b56afc9-kube-api-access-wk2ws\") pod \"d8c617df-907e-46fa-b6be-b0c62b56afc9\" (UID: \"d8c617df-907e-46fa-b6be-b0c62b56afc9\") "
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.416283 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c617df-907e-46fa-b6be-b0c62b56afc9-kube-api-access-wk2ws" (OuterVolumeSpecName: "kube-api-access-wk2ws") pod "d8c617df-907e-46fa-b6be-b0c62b56afc9" (UID: "d8c617df-907e-46fa-b6be-b0c62b56afc9"). InnerVolumeSpecName "kube-api-access-wk2ws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.443849 4736 generic.go:334] "Generic (PLEG): container finished" podID="d8c617df-907e-46fa-b6be-b0c62b56afc9" containerID="e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556" exitCode=0
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.443901 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" event={"ID":"d8c617df-907e-46fa-b6be-b0c62b56afc9","Type":"ContainerDied","Data":"e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556"}
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.443928 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn" event={"ID":"d8c617df-907e-46fa-b6be-b0c62b56afc9","Type":"ContainerDied","Data":"3e89c25a3cf5ecb54c9e438151ce483e8cc4a4c7c2ef323c18641ad1b5bd71cd"}
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.443943 4736 scope.go:117] "RemoveContainer" containerID="e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.444068 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5ngdn"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.468785 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8c617df-907e-46fa-b6be-b0c62b56afc9" (UID: "d8c617df-907e-46fa-b6be-b0c62b56afc9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.474905 4736 scope.go:117] "RemoveContainer" containerID="0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.484927 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8c617df-907e-46fa-b6be-b0c62b56afc9" (UID: "d8c617df-907e-46fa-b6be-b0c62b56afc9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.501220 4736 scope.go:117] "RemoveContainer" containerID="e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.501573 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8c617df-907e-46fa-b6be-b0c62b56afc9" (UID: "d8c617df-907e-46fa-b6be-b0c62b56afc9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:23 crc kubenswrapper[4736]: E0214 11:05:23.501687 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556\": container with ID starting with e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556 not found: ID does not exist" containerID="e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.501715 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556"} err="failed to get container status \"e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556\": rpc error: code = NotFound desc = could not find container \"e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556\": container with ID starting with e1ce2a888f8f66cb3e118ed29e83a30eff34e9edec09d33d0b7ea67c01a30556 not found: ID does not exist"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.501738 4736 scope.go:117] "RemoveContainer" containerID="0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0"
Feb 14 11:05:23 crc kubenswrapper[4736]: E0214 11:05:23.502153 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0\": container with ID starting with 0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0 not found: ID does not exist" containerID="0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.502186 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0"} err="failed to get container status \"0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0\": rpc error: code = NotFound desc = could not find container \"0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0\": container with ID starting with 0c5ac5bbf2a4ddd97b0f36115e7c7bcb601b560adda194b31be33aee757513d0 not found: ID does not exist"
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.504094 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.504118 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.504131 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.504140 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk2ws\" (UniqueName: \"kubernetes.io/projected/d8c617df-907e-46fa-b6be-b0c62b56afc9-kube-api-access-wk2ws\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.506219 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-config" (OuterVolumeSpecName: "config") pod "d8c617df-907e-46fa-b6be-b0c62b56afc9" (UID: "d8c617df-907e-46fa-b6be-b0c62b56afc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.513929 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d8c617df-907e-46fa-b6be-b0c62b56afc9" (UID: "d8c617df-907e-46fa-b6be-b0c62b56afc9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.605315 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-config\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.605344 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8c617df-907e-46fa-b6be-b0c62b56afc9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.640987 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-4wwj2"]
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.876632 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5ngdn"]
Feb 14 11:05:23 crc kubenswrapper[4736]: I0214 11:05:23.883059 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5ngdn"]
Feb 14 11:05:24 crc kubenswrapper[4736]: I0214 11:05:24.410101 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c617df-907e-46fa-b6be-b0c62b56afc9" path="/var/lib/kubelet/pods/d8c617df-907e-46fa-b6be-b0c62b56afc9/volumes"
Feb 14 11:05:24 crc kubenswrapper[4736]: I0214 11:05:24.457027 4736 generic.go:334] "Generic (PLEG): container finished" podID="87fe97c7-f360-4d7b-988f-0779aa692cde" containerID="0bfc10e94071f480c2193842daf0f8dd9527dd2157f95f14bf580a229091ee53" exitCode=0
Feb 14 11:05:24 crc kubenswrapper[4736]: I0214 11:05:24.457090 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2" event={"ID":"87fe97c7-f360-4d7b-988f-0779aa692cde","Type":"ContainerDied","Data":"0bfc10e94071f480c2193842daf0f8dd9527dd2157f95f14bf580a229091ee53"}
Feb 14 11:05:24 crc kubenswrapper[4736]: I0214 11:05:24.457115 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2" event={"ID":"87fe97c7-f360-4d7b-988f-0779aa692cde","Type":"ContainerStarted","Data":"e78ccfb99f94fd5d6a2da0b4195d47b0c531933e4e496204d9be510749d57754"}
Feb 14 11:05:25 crc kubenswrapper[4736]: I0214 11:05:25.468335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2" event={"ID":"87fe97c7-f360-4d7b-988f-0779aa692cde","Type":"ContainerStarted","Data":"f1e404a54dc5bb6aa4b67b02add6675c035260f82c7b2dde14c01bb29940f428"}
Feb 14 11:05:25 crc kubenswrapper[4736]: I0214 11:05:25.468644 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:25 crc kubenswrapper[4736]: I0214 11:05:25.492619 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2" podStartSLOduration=3.492602034 podStartE2EDuration="3.492602034s" podCreationTimestamp="2026-02-14 11:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:05:25.489525126 +0000 UTC m=+1435.858152524" watchObservedRunningTime="2026-02-14 11:05:25.492602034 +0000 UTC m=+1435.861229402"
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.134877 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff66b85ff-4wwj2"
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.216573 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"]
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.216850 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" podUID="7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" containerName="dnsmasq-dns" containerID="cri-o://04d9879aaae2f9350cd52c5dc068f7b44291225390de4bf7caed3b4403cc9d0c" gracePeriod=10
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.551352 4736 generic.go:334] "Generic (PLEG): container finished" podID="7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" containerID="04d9879aaae2f9350cd52c5dc068f7b44291225390de4bf7caed3b4403cc9d0c" exitCode=0
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.551439 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" event={"ID":"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b","Type":"ContainerDied","Data":"04d9879aaae2f9350cd52c5dc068f7b44291225390de4bf7caed3b4403cc9d0c"}
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.755920 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.923109 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpcm7\" (UniqueName: \"kubernetes.io/projected/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-kube-api-access-lpcm7\") pod \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") "
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.923185 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-openstack-edpm-ipam\") pod \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") "
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.923232 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-swift-storage-0\") pod \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") "
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.923268 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-sb\") pod \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") "
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.923292 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-nb\") pod \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") "
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.923424 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-config\") pod \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") "
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.923456 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-svc\") pod \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\" (UID: \"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b\") "
Feb 14 11:05:33 crc kubenswrapper[4736]: I0214 11:05:33.981139 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-kube-api-access-lpcm7" (OuterVolumeSpecName: "kube-api-access-lpcm7") pod "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" (UID: "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b"). InnerVolumeSpecName "kube-api-access-lpcm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.025329 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpcm7\" (UniqueName: \"kubernetes.io/projected/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-kube-api-access-lpcm7\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.092053 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" (UID: "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.121305 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" (UID: "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.127294 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.127321 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.130864 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" (UID: "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.131008 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" (UID: "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.173167 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" (UID: "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.133507 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-config" (OuterVolumeSpecName: "config") pod "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" (UID: "7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.228736 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-config\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.228781 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.228793 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.228808 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.562097 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl" event={"ID":"7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b","Type":"ContainerDied","Data":"30039b08ae57afa3631b440aa13a69f2e065cde3ac753fd22ab8a547bccb96ae"}
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.562148 4736 scope.go:117] "RemoveContainer" containerID="04d9879aaae2f9350cd52c5dc068f7b44291225390de4bf7caed3b4403cc9d0c"
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.562273 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.587237 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"]
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.595817 4736 scope.go:117] "RemoveContainer" containerID="726d9bc12fc0ad3cd2b5e65fe2dc7d724efd722a71936c7790192a73b9077bd5"
Feb 14 11:05:34 crc kubenswrapper[4736]: I0214 11:05:34.601293 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-rfgtl"]
Feb 14 11:05:36 crc kubenswrapper[4736]: I0214 11:05:36.413176 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" path="/var/lib/kubelet/pods/7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b/volumes"
Feb 14 11:05:45 crc kubenswrapper[4736]: I0214 11:05:45.701713 4736 generic.go:334] "Generic (PLEG): container finished" podID="a7b1cfbb-0f84-4915-bae6-0bd165726dba" containerID="790133d90d3ade6091ca340a0389696cc94f6afd00d318966278c491d341661a" exitCode=0
Feb 14 11:05:45 crc kubenswrapper[4736]: I0214 11:05:45.701853 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7b1cfbb-0f84-4915-bae6-0bd165726dba","Type":"ContainerDied","Data":"790133d90d3ade6091ca340a0389696cc94f6afd00d318966278c491d341661a"}
Feb 14 11:05:46 crc kubenswrapper[4736]: I0214 11:05:46.713552 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7b1cfbb-0f84-4915-bae6-0bd165726dba","Type":"ContainerStarted","Data":"b3a1f7d264e76a6829ed2d74cceb0bbf1877bf22ebf0a456a36689b9f870cc7d"}
Feb 14 11:05:46 crc kubenswrapper[4736]: I0214 11:05:46.714322 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 14 11:05:46 crc kubenswrapper[4736]: I0214 11:05:46.715108 4736 generic.go:334] "Generic (PLEG): container finished" podID="070e414c-ea91-48aa-871d-ebfed740c5b3" containerID="091bebd97cb73f89d571cacd2fa6103fb7408e097e663d1ec8565be0706434c3" exitCode=0
Feb 14 11:05:46 crc kubenswrapper[4736]: I0214 11:05:46.715148 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070e414c-ea91-48aa-871d-ebfed740c5b3","Type":"ContainerDied","Data":"091bebd97cb73f89d571cacd2fa6103fb7408e097e663d1ec8565be0706434c3"}
Feb 14 11:05:46 crc kubenswrapper[4736]: I0214 11:05:46.748990 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.748970057 podStartE2EDuration="36.748970057s" podCreationTimestamp="2026-02-14 11:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:05:46.739987248 +0000 UTC m=+1457.108614616" watchObservedRunningTime="2026-02-14 11:05:46.748970057 +0000 UTC m=+1457.117597435"
Feb 14 11:05:47 crc kubenswrapper[4736]: I0214 11:05:47.694990 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 11:05:47 crc kubenswrapper[4736]: I0214 11:05:47.695246 4736 prober.go:107] "Probe failed"
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:05:47 crc kubenswrapper[4736]: I0214 11:05:47.695286 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:05:47 crc kubenswrapper[4736]: I0214 11:05:47.695947 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06df3833e98084abd044f093d850172879dab303a80e13d1c11f831527beea36"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:05:47 crc kubenswrapper[4736]: I0214 11:05:47.695993 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://06df3833e98084abd044f093d850172879dab303a80e13d1c11f831527beea36" gracePeriod=600 Feb 14 11:05:47 crc kubenswrapper[4736]: I0214 11:05:47.731475 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070e414c-ea91-48aa-871d-ebfed740c5b3","Type":"ContainerStarted","Data":"22b27fb733d4db72ccaf89440a1eee8af90b5d4d36c79ae43c2c54abae7537f6"} Feb 14 11:05:47 crc kubenswrapper[4736]: I0214 11:05:47.731796 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:05:47 crc kubenswrapper[4736]: I0214 11:05:47.776444 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=36.776422665 podStartE2EDuration="36.776422665s" podCreationTimestamp="2026-02-14 11:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:05:47.768813606 +0000 UTC m=+1458.137440974" watchObservedRunningTime="2026-02-14 11:05:47.776422665 +0000 UTC m=+1458.145050033" Feb 14 11:05:48 crc kubenswrapper[4736]: I0214 11:05:48.744422 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="06df3833e98084abd044f093d850172879dab303a80e13d1c11f831527beea36" exitCode=0 Feb 14 11:05:48 crc kubenswrapper[4736]: I0214 11:05:48.744514 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"06df3833e98084abd044f093d850172879dab303a80e13d1c11f831527beea36"} Feb 14 11:05:48 crc kubenswrapper[4736]: I0214 11:05:48.745211 4736 scope.go:117] "RemoveContainer" containerID="9999be9865e79e704addc20790845881e6f887c75a1494ff7df882251fb72d5a" Feb 14 11:05:48 crc kubenswrapper[4736]: I0214 11:05:48.745094 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"} Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.218517 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn"] Feb 14 11:05:56 crc kubenswrapper[4736]: E0214 11:05:56.219309 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" containerName="init" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.219321 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" containerName="init" Feb 14 11:05:56 crc kubenswrapper[4736]: E0214 11:05:56.219336 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c617df-907e-46fa-b6be-b0c62b56afc9" containerName="init" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.219342 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c617df-907e-46fa-b6be-b0c62b56afc9" containerName="init" Feb 14 11:05:56 crc kubenswrapper[4736]: E0214 11:05:56.219353 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" containerName="dnsmasq-dns" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.219360 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" containerName="dnsmasq-dns" Feb 14 11:05:56 crc kubenswrapper[4736]: E0214 11:05:56.219368 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c617df-907e-46fa-b6be-b0c62b56afc9" containerName="dnsmasq-dns" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.219374 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c617df-907e-46fa-b6be-b0c62b56afc9" containerName="dnsmasq-dns" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.219538 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c617df-907e-46fa-b6be-b0c62b56afc9" containerName="dnsmasq-dns" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.219559 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee590ab-1eb2-4ec9-ac1c-ff9822a8c74b" containerName="dnsmasq-dns" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.220124 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.224918 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.225009 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.225144 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.225268 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.249467 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn"] Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.328482 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2p97\" (UniqueName: \"kubernetes.io/projected/f0c54768-3b9b-423a-8099-0282ad3ea027-kube-api-access-x2p97\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.328769 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: 
I0214 11:05:56.328885 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.329010 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.430896 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.431005 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2p97\" (UniqueName: \"kubernetes.io/projected/f0c54768-3b9b-423a-8099-0282ad3ea027-kube-api-access-x2p97\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.431077 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.431108 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.437045 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.448693 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.449121 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.452016 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2p97\" (UniqueName: \"kubernetes.io/projected/f0c54768-3b9b-423a-8099-0282ad3ea027-kube-api-access-x2p97\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:56 crc kubenswrapper[4736]: I0214 11:05:56.536820 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:05:57 crc kubenswrapper[4736]: W0214 11:05:57.529236 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0c54768_3b9b_423a_8099_0282ad3ea027.slice/crio-18f0279a00fb69a2433089441908e7a03aab1082a0e9c762dc2f2a8fa8d33f07 WatchSource:0}: Error finding container 18f0279a00fb69a2433089441908e7a03aab1082a0e9c762dc2f2a8fa8d33f07: Status 404 returned error can't find the container with id 18f0279a00fb69a2433089441908e7a03aab1082a0e9c762dc2f2a8fa8d33f07 Feb 14 11:05:57 crc kubenswrapper[4736]: I0214 11:05:57.530681 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn"] Feb 14 11:05:57 crc kubenswrapper[4736]: I0214 11:05:57.831605 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" event={"ID":"f0c54768-3b9b-423a-8099-0282ad3ea027","Type":"ContainerStarted","Data":"18f0279a00fb69a2433089441908e7a03aab1082a0e9c762dc2f2a8fa8d33f07"} Feb 14 11:06:00 crc kubenswrapper[4736]: I0214 11:06:00.984883 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 14 11:06:02 crc kubenswrapper[4736]: 
I0214 11:06:01.999970 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 14 11:06:11 crc kubenswrapper[4736]: E0214 11:06:11.649403 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:18.0-fr5-latest" Feb 14 11:06:11 crc kubenswrapper[4736]: E0214 11:06:11.650228 4736 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 14 11:06:11 crc kubenswrapper[4736]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:18.0-fr5-latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Feb 14 11:06:11 crc kubenswrapper[4736]: - hosts: all Feb 14 11:06:11 crc kubenswrapper[4736]: strategy: linear Feb 14 11:06:11 crc kubenswrapper[4736]: tasks: Feb 14 11:06:11 crc kubenswrapper[4736]: - name: Enable podified-repos Feb 14 11:06:11 crc kubenswrapper[4736]: become: true Feb 14 11:06:11 crc kubenswrapper[4736]: ansible.builtin.shell: | Feb 14 11:06:11 crc kubenswrapper[4736]: set -euxo pipefail Feb 14 11:06:11 crc kubenswrapper[4736]: pushd /var/tmp Feb 14 11:06:11 crc kubenswrapper[4736]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Feb 14 11:06:11 crc kubenswrapper[4736]: pushd repo-setup-main Feb 14 11:06:11 crc kubenswrapper[4736]: python3 -m venv ./venv Feb 14 11:06:11 crc kubenswrapper[4736]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Feb 14 11:06:11 crc kubenswrapper[4736]: ./venv/bin/repo-setup current-podified -b antelope Feb 14 11:06:11 crc kubenswrapper[4736]: popd Feb 14 11:06:11 crc kubenswrapper[4736]: rm -rf 
repo-setup-main Feb 14 11:06:11 crc kubenswrapper[4736]: Feb 14 11:06:11 crc kubenswrapper[4736]: Feb 14 11:06:11 crc kubenswrapper[4736]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Feb 14 11:06:11 crc kubenswrapper[4736]: edpm_override_hosts: openstack-edpm-ipam Feb 14 11:06:11 crc kubenswrapper[4736]: edpm_service_type: repo-setup Feb 14 11:06:11 crc kubenswrapper[4736]: Feb 14 11:06:11 crc kubenswrapper[4736]: Feb 14 11:06:11 crc kubenswrapper[4736]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2p97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMes
sagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn_openstack(f0c54768-3b9b-423a-8099-0282ad3ea027): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 14 11:06:11 crc kubenswrapper[4736]: > logger="UnhandledError" Feb 14 11:06:11 crc kubenswrapper[4736]: E0214 11:06:11.651442 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" podUID="f0c54768-3b9b-423a-8099-0282ad3ea027" Feb 14 11:06:11 crc kubenswrapper[4736]: E0214 11:06:11.993356 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:18.0-fr5-latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" podUID="f0c54768-3b9b-423a-8099-0282ad3ea027" Feb 14 11:06:28 crc kubenswrapper[4736]: I0214 11:06:28.181036 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" event={"ID":"f0c54768-3b9b-423a-8099-0282ad3ea027","Type":"ContainerStarted","Data":"9bff832710656932f9d869826f6d52539de9cacc17b9c740a2b05b8a69f71cda"} Feb 14 11:06:28 crc kubenswrapper[4736]: I0214 11:06:28.202189 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" podStartSLOduration=2.376266565 podStartE2EDuration="32.202167913s" podCreationTimestamp="2026-02-14 11:05:56 +0000 UTC" firstStartedPulling="2026-02-14 11:05:57.531027841 +0000 UTC 
m=+1467.899655209" lastFinishedPulling="2026-02-14 11:06:27.356929169 +0000 UTC m=+1497.725556557" observedRunningTime="2026-02-14 11:06:28.198262791 +0000 UTC m=+1498.566890159" watchObservedRunningTime="2026-02-14 11:06:28.202167913 +0000 UTC m=+1498.570795281" Feb 14 11:06:33 crc kubenswrapper[4736]: I0214 11:06:33.097666 4736 scope.go:117] "RemoveContainer" containerID="7d39be7bf2b580adcdd781b4e6826c3f49bf118e9ed055c5f362a24b96855639" Feb 14 11:06:33 crc kubenswrapper[4736]: I0214 11:06:33.122593 4736 scope.go:117] "RemoveContainer" containerID="675ddc5bcfd6a91b90ecee27df0cae06fcbb985022d6aaf7ad814113f68efb37" Feb 14 11:06:33 crc kubenswrapper[4736]: I0214 11:06:33.171684 4736 scope.go:117] "RemoveContainer" containerID="5b6dc2138345fa033d4f01c9bb5922f780761973c57adbefe19b2db3312dca5d" Feb 14 11:06:42 crc kubenswrapper[4736]: I0214 11:06:42.323530 4736 generic.go:334] "Generic (PLEG): container finished" podID="f0c54768-3b9b-423a-8099-0282ad3ea027" containerID="9bff832710656932f9d869826f6d52539de9cacc17b9c740a2b05b8a69f71cda" exitCode=0 Feb 14 11:06:42 crc kubenswrapper[4736]: I0214 11:06:42.323618 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" event={"ID":"f0c54768-3b9b-423a-8099-0282ad3ea027","Type":"ContainerDied","Data":"9bff832710656932f9d869826f6d52539de9cacc17b9c740a2b05b8a69f71cda"} Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.739803 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.748619 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-repo-setup-combined-ca-bundle\") pod \"f0c54768-3b9b-423a-8099-0282ad3ea027\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.749304 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-inventory\") pod \"f0c54768-3b9b-423a-8099-0282ad3ea027\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.749472 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-ssh-key-openstack-edpm-ipam\") pod \"f0c54768-3b9b-423a-8099-0282ad3ea027\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.749647 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2p97\" (UniqueName: \"kubernetes.io/projected/f0c54768-3b9b-423a-8099-0282ad3ea027-kube-api-access-x2p97\") pod \"f0c54768-3b9b-423a-8099-0282ad3ea027\" (UID: \"f0c54768-3b9b-423a-8099-0282ad3ea027\") " Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.779832 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f0c54768-3b9b-423a-8099-0282ad3ea027" (UID: "f0c54768-3b9b-423a-8099-0282ad3ea027"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.790047 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c54768-3b9b-423a-8099-0282ad3ea027-kube-api-access-x2p97" (OuterVolumeSpecName: "kube-api-access-x2p97") pod "f0c54768-3b9b-423a-8099-0282ad3ea027" (UID: "f0c54768-3b9b-423a-8099-0282ad3ea027"). InnerVolumeSpecName "kube-api-access-x2p97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.814890 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f0c54768-3b9b-423a-8099-0282ad3ea027" (UID: "f0c54768-3b9b-423a-8099-0282ad3ea027"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.851497 4736 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.851524 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.851533 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2p97\" (UniqueName: \"kubernetes.io/projected/f0c54768-3b9b-423a-8099-0282ad3ea027-kube-api-access-x2p97\") on node \"crc\" DevicePath \"\"" Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.912195 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-inventory" (OuterVolumeSpecName: "inventory") pod "f0c54768-3b9b-423a-8099-0282ad3ea027" (UID: "f0c54768-3b9b-423a-8099-0282ad3ea027"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:06:43 crc kubenswrapper[4736]: I0214 11:06:43.956711 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c54768-3b9b-423a-8099-0282ad3ea027-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.342667 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" event={"ID":"f0c54768-3b9b-423a-8099-0282ad3ea027","Type":"ContainerDied","Data":"18f0279a00fb69a2433089441908e7a03aab1082a0e9c762dc2f2a8fa8d33f07"} Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.342708 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18f0279a00fb69a2433089441908e7a03aab1082a0e9c762dc2f2a8fa8d33f07" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.342775 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.456782 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr"] Feb 14 11:06:44 crc kubenswrapper[4736]: E0214 11:06:44.457286 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c54768-3b9b-423a-8099-0282ad3ea027" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.457317 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c54768-3b9b-423a-8099-0282ad3ea027" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.457615 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c54768-3b9b-423a-8099-0282ad3ea027" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.458404 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.462851 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.465160 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.468832 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr"] Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.465373 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.465584 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.567723 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-75mgr\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.567818 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbz8\" (UniqueName: \"kubernetes.io/projected/443efa04-503f-4571-b1a3-d31c88bc0a5c-kube-api-access-sdbz8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-75mgr\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.569206 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-75mgr\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.672164 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-75mgr\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.672219 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdbz8\" (UniqueName: \"kubernetes.io/projected/443efa04-503f-4571-b1a3-d31c88bc0a5c-kube-api-access-sdbz8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-75mgr\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.672305 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-75mgr\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.677035 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-75mgr\" (UID: 
\"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.677848 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-75mgr\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.694022 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdbz8\" (UniqueName: \"kubernetes.io/projected/443efa04-503f-4571-b1a3-d31c88bc0a5c-kube-api-access-sdbz8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-75mgr\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:44 crc kubenswrapper[4736]: I0214 11:06:44.780691 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:45 crc kubenswrapper[4736]: I0214 11:06:45.289267 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr"] Feb 14 11:06:45 crc kubenswrapper[4736]: I0214 11:06:45.353012 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" event={"ID":"443efa04-503f-4571-b1a3-d31c88bc0a5c","Type":"ContainerStarted","Data":"6b89c29353941c1c3bb18ed7310757fcf87d3526fecb79239d097ed1f84e0e84"} Feb 14 11:06:46 crc kubenswrapper[4736]: I0214 11:06:46.364926 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" event={"ID":"443efa04-503f-4571-b1a3-d31c88bc0a5c","Type":"ContainerStarted","Data":"389484ce1381f4790071ed48a124feee32ff496c554e8cfa1a9314eda66e559d"} Feb 14 11:06:46 crc kubenswrapper[4736]: I0214 11:06:46.387100 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" podStartSLOduration=1.998742192 podStartE2EDuration="2.387079718s" podCreationTimestamp="2026-02-14 11:06:44 +0000 UTC" firstStartedPulling="2026-02-14 11:06:45.293731723 +0000 UTC m=+1515.662359101" lastFinishedPulling="2026-02-14 11:06:45.682069259 +0000 UTC m=+1516.050696627" observedRunningTime="2026-02-14 11:06:46.379244192 +0000 UTC m=+1516.747871580" watchObservedRunningTime="2026-02-14 11:06:46.387079718 +0000 UTC m=+1516.755707096" Feb 14 11:06:49 crc kubenswrapper[4736]: I0214 11:06:49.388953 4736 generic.go:334] "Generic (PLEG): container finished" podID="443efa04-503f-4571-b1a3-d31c88bc0a5c" containerID="389484ce1381f4790071ed48a124feee32ff496c554e8cfa1a9314eda66e559d" exitCode=0 Feb 14 11:06:49 crc kubenswrapper[4736]: I0214 11:06:49.389022 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" event={"ID":"443efa04-503f-4571-b1a3-d31c88bc0a5c","Type":"ContainerDied","Data":"389484ce1381f4790071ed48a124feee32ff496c554e8cfa1a9314eda66e559d"} Feb 14 11:06:50 crc kubenswrapper[4736]: I0214 11:06:50.828609 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:50 crc kubenswrapper[4736]: I0214 11:06:50.986365 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdbz8\" (UniqueName: \"kubernetes.io/projected/443efa04-503f-4571-b1a3-d31c88bc0a5c-kube-api-access-sdbz8\") pod \"443efa04-503f-4571-b1a3-d31c88bc0a5c\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " Feb 14 11:06:50 crc kubenswrapper[4736]: I0214 11:06:50.986633 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-inventory\") pod \"443efa04-503f-4571-b1a3-d31c88bc0a5c\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " Feb 14 11:06:50 crc kubenswrapper[4736]: I0214 11:06:50.986684 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-ssh-key-openstack-edpm-ipam\") pod \"443efa04-503f-4571-b1a3-d31c88bc0a5c\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " Feb 14 11:06:50 crc kubenswrapper[4736]: I0214 11:06:50.993013 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/443efa04-503f-4571-b1a3-d31c88bc0a5c-kube-api-access-sdbz8" (OuterVolumeSpecName: "kube-api-access-sdbz8") pod "443efa04-503f-4571-b1a3-d31c88bc0a5c" (UID: "443efa04-503f-4571-b1a3-d31c88bc0a5c"). InnerVolumeSpecName "kube-api-access-sdbz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:06:51 crc kubenswrapper[4736]: E0214 11:06:51.039889 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-ssh-key-openstack-edpm-ipam podName:443efa04-503f-4571-b1a3-d31c88bc0a5c nodeName:}" failed. No retries permitted until 2026-02-14 11:06:51.539857345 +0000 UTC m=+1521.908484713 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-ssh-key-openstack-edpm-ipam") pod "443efa04-503f-4571-b1a3-d31c88bc0a5c" (UID: "443efa04-503f-4571-b1a3-d31c88bc0a5c") : error deleting /var/lib/kubelet/pods/443efa04-503f-4571-b1a3-d31c88bc0a5c/volume-subpaths: remove /var/lib/kubelet/pods/443efa04-503f-4571-b1a3-d31c88bc0a5c/volume-subpaths: no such file or directory Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.042599 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-inventory" (OuterVolumeSpecName: "inventory") pod "443efa04-503f-4571-b1a3-d31c88bc0a5c" (UID: "443efa04-503f-4571-b1a3-d31c88bc0a5c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.089096 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.089143 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdbz8\" (UniqueName: \"kubernetes.io/projected/443efa04-503f-4571-b1a3-d31c88bc0a5c-kube-api-access-sdbz8\") on node \"crc\" DevicePath \"\"" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.409102 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" event={"ID":"443efa04-503f-4571-b1a3-d31c88bc0a5c","Type":"ContainerDied","Data":"6b89c29353941c1c3bb18ed7310757fcf87d3526fecb79239d097ed1f84e0e84"} Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.409420 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b89c29353941c1c3bb18ed7310757fcf87d3526fecb79239d097ed1f84e0e84" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.409566 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-75mgr" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.475567 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd"] Feb 14 11:06:51 crc kubenswrapper[4736]: E0214 11:06:51.475998 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443efa04-503f-4571-b1a3-d31c88bc0a5c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.476020 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="443efa04-503f-4571-b1a3-d31c88bc0a5c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.476270 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="443efa04-503f-4571-b1a3-d31c88bc0a5c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.477025 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.491850 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd"] Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.500100 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.500195 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.500334 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5968\" (UniqueName: \"kubernetes.io/projected/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-kube-api-access-t5968\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.500388 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.601566 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-ssh-key-openstack-edpm-ipam\") pod \"443efa04-503f-4571-b1a3-d31c88bc0a5c\" (UID: \"443efa04-503f-4571-b1a3-d31c88bc0a5c\") " Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.601992 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.602032 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.602112 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5968\" (UniqueName: \"kubernetes.io/projected/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-kube-api-access-t5968\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.602168 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.605385 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "443efa04-503f-4571-b1a3-d31c88bc0a5c" (UID: "443efa04-503f-4571-b1a3-d31c88bc0a5c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.606821 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.608252 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.610159 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.624626 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5968\" (UniqueName: \"kubernetes.io/projected/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-kube-api-access-t5968\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.703402 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/443efa04-503f-4571-b1a3-d31c88bc0a5c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:06:51 crc kubenswrapper[4736]: I0214 11:06:51.799512 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" Feb 14 11:06:52 crc kubenswrapper[4736]: I0214 11:06:52.409230 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd"] Feb 14 11:06:52 crc kubenswrapper[4736]: I0214 11:06:52.425004 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" event={"ID":"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a","Type":"ContainerStarted","Data":"b3943803fb2b1a060269a3a011a0ee6645989233e1f6841b9195728f42e3decd"} Feb 14 11:06:53 crc kubenswrapper[4736]: I0214 11:06:53.436335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" event={"ID":"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a","Type":"ContainerStarted","Data":"21579b76ade9eb7ff2ccbf834a32470272dbaf7dfb545a5379af5ffc19d78cad"} Feb 14 11:06:53 crc kubenswrapper[4736]: I0214 11:06:53.458914 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" podStartSLOduration=1.8346744560000001 podStartE2EDuration="2.45889806s" podCreationTimestamp="2026-02-14 11:06:51 +0000 UTC" firstStartedPulling="2026-02-14 11:06:52.418469768 +0000 UTC m=+1522.787097136" lastFinishedPulling="2026-02-14 11:06:53.042693332 +0000 UTC m=+1523.411320740" observedRunningTime="2026-02-14 11:06:53.457113228 +0000 UTC m=+1523.825740606" watchObservedRunningTime="2026-02-14 11:06:53.45889806 +0000 UTC m=+1523.827525418" Feb 14 11:07:08 crc kubenswrapper[4736]: I0214 11:07:08.816934 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fxhqh"] Feb 14 11:07:08 crc kubenswrapper[4736]: I0214 11:07:08.821008 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:08 crc kubenswrapper[4736]: I0214 11:07:08.854388 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxhqh"] Feb 14 11:07:08 crc kubenswrapper[4736]: I0214 11:07:08.952394 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-catalog-content\") pod \"community-operators-fxhqh\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:08 crc kubenswrapper[4736]: I0214 11:07:08.952489 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-utilities\") pod \"community-operators-fxhqh\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:08 crc kubenswrapper[4736]: I0214 11:07:08.952545 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4tpg\" (UniqueName: \"kubernetes.io/projected/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-kube-api-access-z4tpg\") pod \"community-operators-fxhqh\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:09 crc kubenswrapper[4736]: I0214 11:07:09.054780 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-utilities\") pod \"community-operators-fxhqh\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:09 crc kubenswrapper[4736]: I0214 11:07:09.054892 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z4tpg\" (UniqueName: \"kubernetes.io/projected/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-kube-api-access-z4tpg\") pod \"community-operators-fxhqh\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:09 crc kubenswrapper[4736]: I0214 11:07:09.055131 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-catalog-content\") pod \"community-operators-fxhqh\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:09 crc kubenswrapper[4736]: I0214 11:07:09.055178 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-utilities\") pod \"community-operators-fxhqh\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:09 crc kubenswrapper[4736]: I0214 11:07:09.055622 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-catalog-content\") pod \"community-operators-fxhqh\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:09 crc kubenswrapper[4736]: I0214 11:07:09.077870 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4tpg\" (UniqueName: \"kubernetes.io/projected/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-kube-api-access-z4tpg\") pod \"community-operators-fxhqh\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:09 crc kubenswrapper[4736]: I0214 11:07:09.158554 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:09 crc kubenswrapper[4736]: W0214 11:07:09.855708 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c75d08f_2171_4c1a_ab4d_fccda9b0cbf8.slice/crio-02d980dded1787bf2c11f984ecd9f998472865735ca2290eaf12b39bbf215b4a WatchSource:0}: Error finding container 02d980dded1787bf2c11f984ecd9f998472865735ca2290eaf12b39bbf215b4a: Status 404 returned error can't find the container with id 02d980dded1787bf2c11f984ecd9f998472865735ca2290eaf12b39bbf215b4a Feb 14 11:07:09 crc kubenswrapper[4736]: I0214 11:07:09.858490 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxhqh"] Feb 14 11:07:10 crc kubenswrapper[4736]: I0214 11:07:10.611457 4736 generic.go:334] "Generic (PLEG): container finished" podID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerID="737bea0b4cab68a6e7261c1c99ee488bf6564eb0c25b711467441a6cac1b64db" exitCode=0 Feb 14 11:07:10 crc kubenswrapper[4736]: I0214 11:07:10.611726 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxhqh" event={"ID":"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8","Type":"ContainerDied","Data":"737bea0b4cab68a6e7261c1c99ee488bf6564eb0c25b711467441a6cac1b64db"} Feb 14 11:07:10 crc kubenswrapper[4736]: I0214 11:07:10.611883 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxhqh" event={"ID":"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8","Type":"ContainerStarted","Data":"02d980dded1787bf2c11f984ecd9f998472865735ca2290eaf12b39bbf215b4a"} Feb 14 11:07:11 crc kubenswrapper[4736]: I0214 11:07:11.624107 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxhqh" 
event={"ID":"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8","Type":"ContainerStarted","Data":"4884bb5ea0c3fbc8057c6d3fda2a72c88d0e9ac7b8d0246f98e1ccd86f3b3e73"} Feb 14 11:07:13 crc kubenswrapper[4736]: I0214 11:07:13.651474 4736 generic.go:334] "Generic (PLEG): container finished" podID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerID="4884bb5ea0c3fbc8057c6d3fda2a72c88d0e9ac7b8d0246f98e1ccd86f3b3e73" exitCode=0 Feb 14 11:07:13 crc kubenswrapper[4736]: I0214 11:07:13.651553 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxhqh" event={"ID":"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8","Type":"ContainerDied","Data":"4884bb5ea0c3fbc8057c6d3fda2a72c88d0e9ac7b8d0246f98e1ccd86f3b3e73"} Feb 14 11:07:14 crc kubenswrapper[4736]: I0214 11:07:14.664602 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxhqh" event={"ID":"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8","Type":"ContainerStarted","Data":"168084ad4069f170f79918463e78f5abafe2b406aa8c8e87c0d8d83bf5b89429"} Feb 14 11:07:14 crc kubenswrapper[4736]: I0214 11:07:14.686509 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fxhqh" podStartSLOduration=3.267848113 podStartE2EDuration="6.686478494s" podCreationTimestamp="2026-02-14 11:07:08 +0000 UTC" firstStartedPulling="2026-02-14 11:07:10.613739889 +0000 UTC m=+1540.982367307" lastFinishedPulling="2026-02-14 11:07:14.03237032 +0000 UTC m=+1544.400997688" observedRunningTime="2026-02-14 11:07:14.683501098 +0000 UTC m=+1545.052128466" watchObservedRunningTime="2026-02-14 11:07:14.686478494 +0000 UTC m=+1545.055105862" Feb 14 11:07:19 crc kubenswrapper[4736]: I0214 11:07:19.159629 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:19 crc kubenswrapper[4736]: I0214 11:07:19.160093 4736 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:19 crc kubenswrapper[4736]: I0214 11:07:19.241688 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:19 crc kubenswrapper[4736]: I0214 11:07:19.773680 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:19 crc kubenswrapper[4736]: I0214 11:07:19.825717 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fxhqh"] Feb 14 11:07:21 crc kubenswrapper[4736]: I0214 11:07:21.745168 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fxhqh" podUID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerName="registry-server" containerID="cri-o://168084ad4069f170f79918463e78f5abafe2b406aa8c8e87c0d8d83bf5b89429" gracePeriod=2 Feb 14 11:07:22 crc kubenswrapper[4736]: I0214 11:07:22.758938 4736 generic.go:334] "Generic (PLEG): container finished" podID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerID="168084ad4069f170f79918463e78f5abafe2b406aa8c8e87c0d8d83bf5b89429" exitCode=0 Feb 14 11:07:22 crc kubenswrapper[4736]: I0214 11:07:22.758989 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxhqh" event={"ID":"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8","Type":"ContainerDied","Data":"168084ad4069f170f79918463e78f5abafe2b406aa8c8e87c0d8d83bf5b89429"} Feb 14 11:07:22 crc kubenswrapper[4736]: I0214 11:07:22.997803 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.020645 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-catalog-content\") pod \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.021035 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4tpg\" (UniqueName: \"kubernetes.io/projected/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-kube-api-access-z4tpg\") pod \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.021106 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-utilities\") pod \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\" (UID: \"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8\") " Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.022424 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-utilities" (OuterVolumeSpecName: "utilities") pod "2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" (UID: "2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.028917 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-kube-api-access-z4tpg" (OuterVolumeSpecName: "kube-api-access-z4tpg") pod "2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" (UID: "2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8"). InnerVolumeSpecName "kube-api-access-z4tpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.094562 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" (UID: "2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.122709 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4tpg\" (UniqueName: \"kubernetes.io/projected/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-kube-api-access-z4tpg\") on node \"crc\" DevicePath \"\"" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.122770 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.122784 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.769893 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxhqh" event={"ID":"2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8","Type":"ContainerDied","Data":"02d980dded1787bf2c11f984ecd9f998472865735ca2290eaf12b39bbf215b4a"} Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.769959 4736 scope.go:117] "RemoveContainer" containerID="168084ad4069f170f79918463e78f5abafe2b406aa8c8e87c0d8d83bf5b89429" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.770015 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxhqh" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.806113 4736 scope.go:117] "RemoveContainer" containerID="4884bb5ea0c3fbc8057c6d3fda2a72c88d0e9ac7b8d0246f98e1ccd86f3b3e73" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.818089 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fxhqh"] Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.837788 4736 scope.go:117] "RemoveContainer" containerID="737bea0b4cab68a6e7261c1c99ee488bf6564eb0c25b711467441a6cac1b64db" Feb 14 11:07:23 crc kubenswrapper[4736]: I0214 11:07:23.844314 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fxhqh"] Feb 14 11:07:24 crc kubenswrapper[4736]: I0214 11:07:24.430786 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" path="/var/lib/kubelet/pods/2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8/volumes" Feb 14 11:07:33 crc kubenswrapper[4736]: I0214 11:07:33.506281 4736 scope.go:117] "RemoveContainer" containerID="ecdc3b6214a48cbe77a947fc680f4c11e0ae0f50c6d25ea82d0dc2ad6ff1c87e" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.049293 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sbfls"] Feb 14 11:08:11 crc kubenswrapper[4736]: E0214 11:08:11.051105 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerName="extract-content" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.051124 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerName="extract-content" Feb 14 11:08:11 crc kubenswrapper[4736]: E0214 11:08:11.051174 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerName="extract-utilities" Feb 14 
11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.051185 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerName="extract-utilities" Feb 14 11:08:11 crc kubenswrapper[4736]: E0214 11:08:11.051259 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerName="registry-server" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.051273 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerName="registry-server" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.051605 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c75d08f-2171-4c1a-ab4d-fccda9b0cbf8" containerName="registry-server" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.054040 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.063707 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-utilities\") pod \"redhat-marketplace-sbfls\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.063836 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-catalog-content\") pod \"redhat-marketplace-sbfls\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.063927 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95h5p\" (UniqueName: 
\"kubernetes.io/projected/d7653c0d-15b2-462a-9538-446089ac5540-kube-api-access-95h5p\") pod \"redhat-marketplace-sbfls\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.064566 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbfls"] Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.164945 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-catalog-content\") pod \"redhat-marketplace-sbfls\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.165034 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95h5p\" (UniqueName: \"kubernetes.io/projected/d7653c0d-15b2-462a-9538-446089ac5540-kube-api-access-95h5p\") pod \"redhat-marketplace-sbfls\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.165118 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-utilities\") pod \"redhat-marketplace-sbfls\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.165657 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-utilities\") pod \"redhat-marketplace-sbfls\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: 
I0214 11:08:11.165786 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-catalog-content\") pod \"redhat-marketplace-sbfls\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.198709 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95h5p\" (UniqueName: \"kubernetes.io/projected/d7653c0d-15b2-462a-9538-446089ac5540-kube-api-access-95h5p\") pod \"redhat-marketplace-sbfls\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.441168 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:11 crc kubenswrapper[4736]: I0214 11:08:11.920823 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbfls"] Feb 14 11:08:12 crc kubenswrapper[4736]: I0214 11:08:12.199913 4736 generic.go:334] "Generic (PLEG): container finished" podID="d7653c0d-15b2-462a-9538-446089ac5540" containerID="10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629" exitCode=0 Feb 14 11:08:12 crc kubenswrapper[4736]: I0214 11:08:12.199964 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbfls" event={"ID":"d7653c0d-15b2-462a-9538-446089ac5540","Type":"ContainerDied","Data":"10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629"} Feb 14 11:08:12 crc kubenswrapper[4736]: I0214 11:08:12.200005 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbfls" 
event={"ID":"d7653c0d-15b2-462a-9538-446089ac5540","Type":"ContainerStarted","Data":"61ad6a9d5c00ff5f60dcd937a8a9378ddcf0e02b4ff6b2f87d161ed2c919ff87"} Feb 14 11:08:17 crc kubenswrapper[4736]: I0214 11:08:17.162112 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbfls" event={"ID":"d7653c0d-15b2-462a-9538-446089ac5540","Type":"ContainerStarted","Data":"4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80"} Feb 14 11:08:17 crc kubenswrapper[4736]: I0214 11:08:17.695357 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:08:17 crc kubenswrapper[4736]: I0214 11:08:17.695398 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:08:19 crc kubenswrapper[4736]: I0214 11:08:19.180115 4736 generic.go:334] "Generic (PLEG): container finished" podID="d7653c0d-15b2-462a-9538-446089ac5540" containerID="4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80" exitCode=0 Feb 14 11:08:19 crc kubenswrapper[4736]: I0214 11:08:19.180299 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbfls" event={"ID":"d7653c0d-15b2-462a-9538-446089ac5540","Type":"ContainerDied","Data":"4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80"} Feb 14 11:08:20 crc kubenswrapper[4736]: I0214 11:08:20.191813 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbfls" 
event={"ID":"d7653c0d-15b2-462a-9538-446089ac5540","Type":"ContainerStarted","Data":"c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32"} Feb 14 11:08:20 crc kubenswrapper[4736]: I0214 11:08:20.218758 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sbfls" podStartSLOduration=1.471533749 podStartE2EDuration="9.21872722s" podCreationTimestamp="2026-02-14 11:08:11 +0000 UTC" firstStartedPulling="2026-02-14 11:08:12.204929504 +0000 UTC m=+1602.573556862" lastFinishedPulling="2026-02-14 11:08:19.952122955 +0000 UTC m=+1610.320750333" observedRunningTime="2026-02-14 11:08:20.211287424 +0000 UTC m=+1610.579914792" watchObservedRunningTime="2026-02-14 11:08:20.21872722 +0000 UTC m=+1610.587354588" Feb 14 11:08:21 crc kubenswrapper[4736]: I0214 11:08:21.441718 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:21 crc kubenswrapper[4736]: I0214 11:08:21.442015 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:22 crc kubenswrapper[4736]: I0214 11:08:22.488172 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sbfls" podUID="d7653c0d-15b2-462a-9538-446089ac5540" containerName="registry-server" probeResult="failure" output=< Feb 14 11:08:22 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:08:22 crc kubenswrapper[4736]: > Feb 14 11:08:31 crc kubenswrapper[4736]: I0214 11:08:31.530581 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:31 crc kubenswrapper[4736]: I0214 11:08:31.617078 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:31 crc kubenswrapper[4736]: 
I0214 11:08:31.780703 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbfls"] Feb 14 11:08:33 crc kubenswrapper[4736]: I0214 11:08:33.346598 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sbfls" podUID="d7653c0d-15b2-462a-9538-446089ac5540" containerName="registry-server" containerID="cri-o://c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32" gracePeriod=2 Feb 14 11:08:33 crc kubenswrapper[4736]: I0214 11:08:33.828025 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:33 crc kubenswrapper[4736]: I0214 11:08:33.952717 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95h5p\" (UniqueName: \"kubernetes.io/projected/d7653c0d-15b2-462a-9538-446089ac5540-kube-api-access-95h5p\") pod \"d7653c0d-15b2-462a-9538-446089ac5540\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " Feb 14 11:08:33 crc kubenswrapper[4736]: I0214 11:08:33.952818 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-catalog-content\") pod \"d7653c0d-15b2-462a-9538-446089ac5540\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " Feb 14 11:08:33 crc kubenswrapper[4736]: I0214 11:08:33.952956 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-utilities\") pod \"d7653c0d-15b2-462a-9538-446089ac5540\" (UID: \"d7653c0d-15b2-462a-9538-446089ac5540\") " Feb 14 11:08:33 crc kubenswrapper[4736]: I0214 11:08:33.954638 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-utilities" 
(OuterVolumeSpecName: "utilities") pod "d7653c0d-15b2-462a-9538-446089ac5540" (UID: "d7653c0d-15b2-462a-9538-446089ac5540"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:08:33 crc kubenswrapper[4736]: I0214 11:08:33.972520 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7653c0d-15b2-462a-9538-446089ac5540-kube-api-access-95h5p" (OuterVolumeSpecName: "kube-api-access-95h5p") pod "d7653c0d-15b2-462a-9538-446089ac5540" (UID: "d7653c0d-15b2-462a-9538-446089ac5540"). InnerVolumeSpecName "kube-api-access-95h5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:08:33 crc kubenswrapper[4736]: I0214 11:08:33.983579 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7653c0d-15b2-462a-9538-446089ac5540" (UID: "d7653c0d-15b2-462a-9538-446089ac5540"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.056011 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95h5p\" (UniqueName: \"kubernetes.io/projected/d7653c0d-15b2-462a-9538-446089ac5540-kube-api-access-95h5p\") on node \"crc\" DevicePath \"\"" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.056069 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.056084 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7653c0d-15b2-462a-9538-446089ac5540-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.361117 4736 generic.go:334] "Generic (PLEG): container finished" podID="d7653c0d-15b2-462a-9538-446089ac5540" containerID="c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32" exitCode=0 Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.361156 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbfls" event={"ID":"d7653c0d-15b2-462a-9538-446089ac5540","Type":"ContainerDied","Data":"c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32"} Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.361205 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbfls" event={"ID":"d7653c0d-15b2-462a-9538-446089ac5540","Type":"ContainerDied","Data":"61ad6a9d5c00ff5f60dcd937a8a9378ddcf0e02b4ff6b2f87d161ed2c919ff87"} Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.361224 4736 scope.go:117] "RemoveContainer" containerID="c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 
11:08:34.361222 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbfls" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.395232 4736 scope.go:117] "RemoveContainer" containerID="4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.422167 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbfls"] Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.429009 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbfls"] Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.467323 4736 scope.go:117] "RemoveContainer" containerID="10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.491184 4736 scope.go:117] "RemoveContainer" containerID="c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32" Feb 14 11:08:34 crc kubenswrapper[4736]: E0214 11:08:34.491593 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32\": container with ID starting with c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32 not found: ID does not exist" containerID="c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.491622 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32"} err="failed to get container status \"c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32\": rpc error: code = NotFound desc = could not find container \"c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32\": container with ID starting with 
c7a12f8783604ff5e4f811924a1169f8df3f310a282f18af039b01f8dba6fc32 not found: ID does not exist" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.491642 4736 scope.go:117] "RemoveContainer" containerID="4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80" Feb 14 11:08:34 crc kubenswrapper[4736]: E0214 11:08:34.492354 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80\": container with ID starting with 4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80 not found: ID does not exist" containerID="4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.492374 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80"} err="failed to get container status \"4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80\": rpc error: code = NotFound desc = could not find container \"4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80\": container with ID starting with 4a03ea4325db7c9452917935d8dc0e1acdd47bb9a10bd670aced2a67fa07ab80 not found: ID does not exist" Feb 14 11:08:34 crc kubenswrapper[4736]: I0214 11:08:34.492388 4736 scope.go:117] "RemoveContainer" containerID="10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629" Feb 14 11:08:34 crc kubenswrapper[4736]: E0214 11:08:34.492674 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629\": container with ID starting with 10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629 not found: ID does not exist" containerID="10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629" Feb 14 11:08:34 crc 
kubenswrapper[4736]: I0214 11:08:34.492694 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629"} err="failed to get container status \"10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629\": rpc error: code = NotFound desc = could not find container \"10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629\": container with ID starting with 10a5b0ec4c4c0daf7dfd9030a1dd87279211d1a9e4b7193dac9432682e46e629 not found: ID does not exist" Feb 14 11:08:36 crc kubenswrapper[4736]: I0214 11:08:36.411620 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7653c0d-15b2-462a-9538-446089ac5540" path="/var/lib/kubelet/pods/d7653c0d-15b2-462a-9538-446089ac5540/volumes" Feb 14 11:08:47 crc kubenswrapper[4736]: I0214 11:08:47.695257 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:08:47 crc kubenswrapper[4736]: I0214 11:08:47.695847 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:09:17 crc kubenswrapper[4736]: I0214 11:09:17.695379 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:09:17 crc kubenswrapper[4736]: I0214 11:09:17.695961 4736 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:09:17 crc kubenswrapper[4736]: I0214 11:09:17.696017 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:09:17 crc kubenswrapper[4736]: I0214 11:09:17.696863 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:09:17 crc kubenswrapper[4736]: I0214 11:09:17.696926 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" gracePeriod=600 Feb 14 11:09:17 crc kubenswrapper[4736]: E0214 11:09:17.844593 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:09:18 crc kubenswrapper[4736]: I0214 11:09:18.842125 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" exitCode=0 Feb 14 11:09:18 crc kubenswrapper[4736]: I0214 11:09:18.842467 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"} Feb 14 11:09:18 crc kubenswrapper[4736]: I0214 11:09:18.842506 4736 scope.go:117] "RemoveContainer" containerID="06df3833e98084abd044f093d850172879dab303a80e13d1c11f831527beea36" Feb 14 11:09:18 crc kubenswrapper[4736]: I0214 11:09:18.843187 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:09:18 crc kubenswrapper[4736]: E0214 11:09:18.843450 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:09:31 crc kubenswrapper[4736]: I0214 11:09:31.397337 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:09:31 crc kubenswrapper[4736]: E0214 11:09:31.397984 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 
11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.689120 4736 scope.go:117] "RemoveContainer" containerID="8672497a34a4b38ae9263f4e50394172d823c234d6a4b53c0a4eb6a64ce1a35e" Feb 14 11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.736938 4736 scope.go:117] "RemoveContainer" containerID="f2222a5e60019c8de3edda36b27acd096d01dc0658da5ef797621feb11b3ccab" Feb 14 11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.758271 4736 scope.go:117] "RemoveContainer" containerID="ede6ec94651d7b96ba34a78a1a504dd567416ba016f4bdf9816ce14ae6433068" Feb 14 11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.793949 4736 scope.go:117] "RemoveContainer" containerID="767436af2ec86d8f399b08ed2190713dea404732b8db8c1f2ba8d785b1dbcd26" Feb 14 11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.832962 4736 scope.go:117] "RemoveContainer" containerID="7c9e8a124f3ee742ef112462960112fb61e196ea6f0cfaef9edb481c0640db09" Feb 14 11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.856468 4736 scope.go:117] "RemoveContainer" containerID="d923b35358a9058a0753455785e9b61ebdb4a8da0fd4074460b80b64ac5f773d" Feb 14 11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.883550 4736 scope.go:117] "RemoveContainer" containerID="d4971abe50e27f239465345d6621b860ac1118c84b4c1da8ef728372b0aec94c" Feb 14 11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.904112 4736 scope.go:117] "RemoveContainer" containerID="167effffcbdfdcc269e94b8eb8c0d39b371275b976fb7478d60f769184e5c158" Feb 14 11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.928058 4736 scope.go:117] "RemoveContainer" containerID="6d6f0fa7f10557fdf58e635daf79ff9ac56ecb1568a2e1ef7528b22d1c357285" Feb 14 11:09:33 crc kubenswrapper[4736]: I0214 11:09:33.961868 4736 scope.go:117] "RemoveContainer" containerID="20f8d36bf25b43976fa083469d769be72cc00ff4cd07fa24822d6327c7894937" Feb 14 11:09:42 crc kubenswrapper[4736]: I0214 11:09:42.397499 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:09:42 crc 
kubenswrapper[4736]: E0214 11:09:42.398486 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:09:53 crc kubenswrapper[4736]: I0214 11:09:53.396914 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:09:53 crc kubenswrapper[4736]: E0214 11:09:53.397673 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:10:00 crc kubenswrapper[4736]: I0214 11:10:00.068861 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1952-account-create-update-sg565"] Feb 14 11:10:00 crc kubenswrapper[4736]: I0214 11:10:00.079965 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-r79mn"] Feb 14 11:10:00 crc kubenswrapper[4736]: I0214 11:10:00.090469 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1952-account-create-update-sg565"] Feb 14 11:10:00 crc kubenswrapper[4736]: I0214 11:10:00.098770 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-r79mn"] Feb 14 11:10:00 crc kubenswrapper[4736]: I0214 11:10:00.410734 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e" 
path="/var/lib/kubelet/pods/df17de56-a5e4-4bb1-aa8e-c8f16df9fb8e/volumes"
Feb 14 11:10:00 crc kubenswrapper[4736]: I0214 11:10:00.412071 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe7622f-339d-408e-a8bc-b83f3fd55653" path="/var/lib/kubelet/pods/efe7622f-339d-408e-a8bc-b83f3fd55653/volumes"
Feb 14 11:10:01 crc kubenswrapper[4736]: I0214 11:10:01.049700 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-bpqsj"]
Feb 14 11:10:01 crc kubenswrapper[4736]: I0214 11:10:01.069029 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-877f-account-create-update-pwlkx"]
Feb 14 11:10:01 crc kubenswrapper[4736]: I0214 11:10:01.081364 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-bpqsj"]
Feb 14 11:10:01 crc kubenswrapper[4736]: I0214 11:10:01.091247 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-877f-account-create-update-pwlkx"]
Feb 14 11:10:02 crc kubenswrapper[4736]: I0214 11:10:02.409617 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdae2a62-d876-4c93-b5a3-ae8bcb002f08" path="/var/lib/kubelet/pods/bdae2a62-d876-4c93-b5a3-ae8bcb002f08/volumes"
Feb 14 11:10:02 crc kubenswrapper[4736]: I0214 11:10:02.410837 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6866fe7-50cc-40b2-8326-ac36ca31eb25" path="/var/lib/kubelet/pods/d6866fe7-50cc-40b2-8326-ac36ca31eb25/volumes"
Feb 14 11:10:04 crc kubenswrapper[4736]: I0214 11:10:04.316378 4736 generic.go:334] "Generic (PLEG): container finished" podID="3bc4af51-ea9d-471b-a6d1-6330e3f48a5a" containerID="21579b76ade9eb7ff2ccbf834a32470272dbaf7dfb545a5379af5ffc19d78cad" exitCode=0
Feb 14 11:10:04 crc kubenswrapper[4736]: I0214 11:10:04.316472 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" event={"ID":"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a","Type":"ContainerDied","Data":"21579b76ade9eb7ff2ccbf834a32470272dbaf7dfb545a5379af5ffc19d78cad"}
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.397705 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:10:05 crc kubenswrapper[4736]: E0214 11:10:05.398150 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.772607 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd"
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.965447 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5968\" (UniqueName: \"kubernetes.io/projected/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-kube-api-access-t5968\") pod \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") "
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.966642 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-inventory\") pod \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") "
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.967019 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-bootstrap-combined-ca-bundle\") pod \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") "
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.967384 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-ssh-key-openstack-edpm-ipam\") pod \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\" (UID: \"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a\") "
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.973047 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-kube-api-access-t5968" (OuterVolumeSpecName: "kube-api-access-t5968") pod "3bc4af51-ea9d-471b-a6d1-6330e3f48a5a" (UID: "3bc4af51-ea9d-471b-a6d1-6330e3f48a5a"). InnerVolumeSpecName "kube-api-access-t5968". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.975472 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3bc4af51-ea9d-471b-a6d1-6330e3f48a5a" (UID: "3bc4af51-ea9d-471b-a6d1-6330e3f48a5a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.997843 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-inventory" (OuterVolumeSpecName: "inventory") pod "3bc4af51-ea9d-471b-a6d1-6330e3f48a5a" (UID: "3bc4af51-ea9d-471b-a6d1-6330e3f48a5a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:10:05 crc kubenswrapper[4736]: I0214 11:10:05.999329 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3bc4af51-ea9d-471b-a6d1-6330e3f48a5a" (UID: "3bc4af51-ea9d-471b-a6d1-6330e3f48a5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.035703 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4n8xn"]
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.076615 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5968\" (UniqueName: \"kubernetes.io/projected/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-kube-api-access-t5968\") on node \"crc\" DevicePath \"\""
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.076650 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.076663 4736 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.076678 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4af51-ea9d-471b-a6d1-6330e3f48a5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.079928 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9a59-account-create-update-hxb95"]
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.088200 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4n8xn"]
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.095954 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9a59-account-create-update-hxb95"]
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.336651 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd" event={"ID":"3bc4af51-ea9d-471b-a6d1-6330e3f48a5a","Type":"ContainerDied","Data":"b3943803fb2b1a060269a3a011a0ee6645989233e1f6841b9195728f42e3decd"}
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.336991 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3943803fb2b1a060269a3a011a0ee6645989233e1f6841b9195728f42e3decd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.336713 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.408290 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35210bf4-ee1c-4534-ab21-c04b78c3eb1e" path="/var/lib/kubelet/pods/35210bf4-ee1c-4534-ab21-c04b78c3eb1e/volumes"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.409078 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bebc481-1b21-4a52-9d7e-f2683269c0a5" path="/var/lib/kubelet/pods/3bebc481-1b21-4a52-9d7e-f2683269c0a5/volumes"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.448271 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"]
Feb 14 11:10:06 crc kubenswrapper[4736]: E0214 11:10:06.448644 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7653c0d-15b2-462a-9538-446089ac5540" containerName="extract-content"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.448661 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7653c0d-15b2-462a-9538-446089ac5540" containerName="extract-content"
Feb 14 11:10:06 crc kubenswrapper[4736]: E0214 11:10:06.448672 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7653c0d-15b2-462a-9538-446089ac5540" containerName="registry-server"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.448678 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7653c0d-15b2-462a-9538-446089ac5540" containerName="registry-server"
Feb 14 11:10:06 crc kubenswrapper[4736]: E0214 11:10:06.448708 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7653c0d-15b2-462a-9538-446089ac5540" containerName="extract-utilities"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.448716 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7653c0d-15b2-462a-9538-446089ac5540" containerName="extract-utilities"
Feb 14 11:10:06 crc kubenswrapper[4736]: E0214 11:10:06.448727 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc4af51-ea9d-471b-a6d1-6330e3f48a5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.448733 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc4af51-ea9d-471b-a6d1-6330e3f48a5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.448913 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7653c0d-15b2-462a-9538-446089ac5540" containerName="registry-server"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.448926 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc4af51-ea9d-471b-a6d1-6330e3f48a5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.450016 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.451691 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.460085 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"]
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.461887 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.463212 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.463671 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.593978 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6dn\" (UniqueName: \"kubernetes.io/projected/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-kube-api-access-ql6dn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.594159 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.594225 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.696626 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.696782 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.696878 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6dn\" (UniqueName: \"kubernetes.io/projected/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-kube-api-access-ql6dn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.700997 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.701614 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.718142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6dn\" (UniqueName: \"kubernetes.io/projected/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-kube-api-access-ql6dn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:06 crc kubenswrapper[4736]: I0214 11:10:06.769811 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:10:07 crc kubenswrapper[4736]: I0214 11:10:07.397793 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"]
Feb 14 11:10:07 crc kubenswrapper[4736]: I0214 11:10:07.402396 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 11:10:08 crc kubenswrapper[4736]: I0214 11:10:08.356468 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd" event={"ID":"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c","Type":"ContainerStarted","Data":"da6e548e52ebc63e056c0a875475ba4d466c8e1d18276d29e942d7096ae1d548"}
Feb 14 11:10:08 crc kubenswrapper[4736]: I0214 11:10:08.356849 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd" event={"ID":"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c","Type":"ContainerStarted","Data":"ba92812316c2149dbc8b3eae63d21d540ae3b10f8f4d6b0eb804f7ff64ed7780"}
Feb 14 11:10:18 crc kubenswrapper[4736]: I0214 11:10:18.397885 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:10:18 crc kubenswrapper[4736]: E0214 11:10:18.398852 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:10:31 crc kubenswrapper[4736]: I0214 11:10:31.056558 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd" podStartSLOduration=24.593217337 podStartE2EDuration="25.056534478s" podCreationTimestamp="2026-02-14 11:10:06 +0000 UTC" firstStartedPulling="2026-02-14 11:10:07.402166382 +0000 UTC m=+1717.770793750" lastFinishedPulling="2026-02-14 11:10:07.865483493 +0000 UTC m=+1718.234110891" observedRunningTime="2026-02-14 11:10:08.376552628 +0000 UTC m=+1718.745179986" watchObservedRunningTime="2026-02-14 11:10:31.056534478 +0000 UTC m=+1741.425161856"
Feb 14 11:10:31 crc kubenswrapper[4736]: I0214 11:10:31.067242 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xhm89"]
Feb 14 11:10:31 crc kubenswrapper[4736]: I0214 11:10:31.078844 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xhm89"]
Feb 14 11:10:32 crc kubenswrapper[4736]: I0214 11:10:32.412027 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e18e214-64e7-49ee-bd4a-29b91d1ac8eb" path="/var/lib/kubelet/pods/6e18e214-64e7-49ee-bd4a-29b91d1ac8eb/volumes"
Feb 14 11:10:33 crc kubenswrapper[4736]: I0214 11:10:33.397534 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:10:33 crc kubenswrapper[4736]: E0214 11:10:33.397949 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:10:34 crc kubenswrapper[4736]: I0214 11:10:34.077576 4736 scope.go:117] "RemoveContainer" containerID="854d4f26e50ae210147221370a867d170b3d197d869b79263c5ac8f658533ae8"
Feb 14 11:10:34 crc kubenswrapper[4736]: I0214 11:10:34.106003 4736 scope.go:117] "RemoveContainer" containerID="dd6f228a2bc49ec01badf7acecb9bd4430b96ceb0f18f2349dafa69cb16c93ef"
Feb 14 11:10:34 crc kubenswrapper[4736]: I0214 11:10:34.150613 4736 scope.go:117] "RemoveContainer" containerID="0c1a2d141d446feea0c780f97d104f6026543409118704e95df841f8f025fad5"
Feb 14 11:10:34 crc kubenswrapper[4736]: I0214 11:10:34.197894 4736 scope.go:117] "RemoveContainer" containerID="ccfcde2abb7be534a534ee9e8717234afe40132469e27ec578267eeb3b7c8af9"
Feb 14 11:10:34 crc kubenswrapper[4736]: I0214 11:10:34.222972 4736 scope.go:117] "RemoveContainer" containerID="5f837766c253c9f6880a425773eb11a84f8e8778ee562d7e0f8199c1049beca5"
Feb 14 11:10:34 crc kubenswrapper[4736]: I0214 11:10:34.260386 4736 scope.go:117] "RemoveContainer" containerID="3cdb8607c5cbae7e4b06244c2450359d41343e1eebfff587938affac4a32225b"
Feb 14 11:10:34 crc kubenswrapper[4736]: I0214 11:10:34.300467 4736 scope.go:117] "RemoveContainer" containerID="6b57c021b584071648ceb2516f10a821ce461aec3a2998959873e7c5bd309833"
Feb 14 11:10:34 crc kubenswrapper[4736]: I0214 11:10:34.322148 4736 scope.go:117] "RemoveContainer" containerID="5e0222a9ba3cc626c5c25924ade6994b2834367ff8e4e98e96c3f7e4492bbd72"
Feb 14 11:10:34 crc kubenswrapper[4736]: I0214 11:10:34.350541 4736 scope.go:117] "RemoveContainer" containerID="2e0a3cc8ce29d1b26d9d491e58885badde645f4c279e2f3a48002dfc45a2c9cd"
Feb 14 11:10:36 crc kubenswrapper[4736]: I0214 11:10:36.058172 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8f1b-account-create-update-8bd5x"]
Feb 14 11:10:36 crc kubenswrapper[4736]: I0214 11:10:36.078420 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8f1b-account-create-update-8bd5x"]
Feb 14 11:10:36 crc kubenswrapper[4736]: I0214 11:10:36.428574 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe4563b-fc89-4e16-9fb7-f832fc1cf699" path="/var/lib/kubelet/pods/7fe4563b-fc89-4e16-9fb7-f832fc1cf699/volumes"
Feb 14 11:10:39 crc kubenswrapper[4736]: I0214 11:10:39.041967 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d904-account-create-update-7p9zr"]
Feb 14 11:10:39 crc kubenswrapper[4736]: I0214 11:10:39.053795 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d904-account-create-update-7p9zr"]
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.064015 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fq9gh"]
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.084094 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2108-account-create-update-gqshp"]
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.097481 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vlfkq"]
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.106688 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-zmqp8"]
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.114160 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2108-account-create-update-gqshp"]
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.120853 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fq9gh"]
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.129481 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-zmqp8"]
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.139591 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vlfkq"]
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.419332 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183769d0-3dde-43ba-995a-16aa55c72ff8" path="/var/lib/kubelet/pods/183769d0-3dde-43ba-995a-16aa55c72ff8/volumes"
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.420668 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3515d4a8-4470-4062-99c4-54388510f693" path="/var/lib/kubelet/pods/3515d4a8-4470-4062-99c4-54388510f693/volumes"
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.422394 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4e563c-8b5a-4405-a623-805bd1da0ef3" path="/var/lib/kubelet/pods/6d4e563c-8b5a-4405-a623-805bd1da0ef3/volumes"
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.423231 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac413b61-b5c8-44d6-9968-b2a2e166ae25" path="/var/lib/kubelet/pods/ac413b61-b5c8-44d6-9968-b2a2e166ae25/volumes"
Feb 14 11:10:40 crc kubenswrapper[4736]: I0214 11:10:40.424661 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af05acd6-857a-4997-a369-54921d3db536" path="/var/lib/kubelet/pods/af05acd6-857a-4997-a369-54921d3db536/volumes"
Feb 14 11:10:45 crc kubenswrapper[4736]: I0214 11:10:45.397640 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:10:45 crc kubenswrapper[4736]: E0214 11:10:45.398502 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:10:50 crc kubenswrapper[4736]: I0214 11:10:50.066851 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wm596"]
Feb 14 11:10:50 crc kubenswrapper[4736]: I0214 11:10:50.093955 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wm596"]
Feb 14 11:10:50 crc kubenswrapper[4736]: I0214 11:10:50.418099 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d" path="/var/lib/kubelet/pods/efc461aa-2c17-46cb-ab1a-c0fcc4e8cb4d/volumes"
Feb 14 11:11:00 crc kubenswrapper[4736]: I0214 11:11:00.403120 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:11:00 crc kubenswrapper[4736]: E0214 11:11:00.405299 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:11:08 crc kubenswrapper[4736]: I0214 11:11:08.050023 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kbq8d"]
Feb 14 11:11:08 crc kubenswrapper[4736]: I0214 11:11:08.063035 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kbq8d"]
Feb 14 11:11:08 crc kubenswrapper[4736]: I0214 11:11:08.413197 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1" path="/var/lib/kubelet/pods/7755c5ab-4aba-4e82-a6f7-e6d63ca8efe1/volumes"
Feb 14 11:11:14 crc kubenswrapper[4736]: I0214 11:11:14.398422 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:11:14 crc kubenswrapper[4736]: E0214 11:11:14.399700 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:11:21 crc kubenswrapper[4736]: I0214 11:11:21.069430 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2kwz6"]
Feb 14 11:11:21 crc kubenswrapper[4736]: I0214 11:11:21.083402 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2kwz6"]
Feb 14 11:11:22 crc kubenswrapper[4736]: I0214 11:11:22.418037 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c" path="/var/lib/kubelet/pods/abfe9443-ba9d-42a1-8a8e-d71a2ce9f25c/volumes"
Feb 14 11:11:29 crc kubenswrapper[4736]: I0214 11:11:29.398087 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:11:29 crc kubenswrapper[4736]: E0214 11:11:29.399210 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:11:33 crc kubenswrapper[4736]: I0214 11:11:33.040761 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tb7hg"]
Feb 14 11:11:33 crc kubenswrapper[4736]: I0214 11:11:33.049899 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tb7hg"]
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.412644 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3474d549-a236-46a6-ad9a-46186dca5831" path="/var/lib/kubelet/pods/3474d549-a236-46a6-ad9a-46186dca5831/volumes"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.536467 4736 scope.go:117] "RemoveContainer" containerID="bf22edb45d3f4dff6997e349b2503d46fb44beb540012ac4219ad4d8df2ebccd"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.577148 4736 scope.go:117] "RemoveContainer" containerID="228a6235fc2953bef50d4ad1258655baa52ad3c04ee47f87742f41bd15c5ef5f"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.641197 4736 scope.go:117] "RemoveContainer" containerID="6e26534a72b30431578facb514b6fa11fbc9fff7e86ae4a6671c56e10a76e1e1"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.683335 4736 scope.go:117] "RemoveContainer" containerID="1fed47927ab41ff09643f034e4c000b2d05e2486c1e4c6d694c29187755e54c9"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.738109 4736 scope.go:117] "RemoveContainer" containerID="41744a3fa865f70465ea016a3af02102a06751f863407c3cb340f9ddd2d757af"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.769838 4736 scope.go:117] "RemoveContainer" containerID="f3b4377da7dd5855e4eff16ad5c07880f6f10d48d1fe9b2819209d15a27858e7"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.801898 4736 scope.go:117] "RemoveContainer" containerID="c68572b4f95e3350f80c134eacf2c8bad1e8c242e1941b227b8aae42e6db8d8d"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.829132 4736 scope.go:117] "RemoveContainer" containerID="bfd7e43aa7874187cacc4cc946c8313c56433ba8ee5357eb660526803a99698c"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.861123 4736 scope.go:117] "RemoveContainer" containerID="5400047ea1aa4428f622735a7a13601b46e835600b5b9c51877d2f2d49b7d805"
Feb 14 11:11:34 crc kubenswrapper[4736]: I0214 11:11:34.881335 4736 scope.go:117] "RemoveContainer" containerID="d7005beb39ace8e2da9d9690c4fa2a56b09fdabeee964c54ab9a7b4481ab0e0c"
Feb 14 11:11:40 crc kubenswrapper[4736]: I0214 11:11:40.404292 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:11:40 crc kubenswrapper[4736]: E0214 11:11:40.406404 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:11:41 crc kubenswrapper[4736]: I0214 11:11:41.305645 4736 generic.go:334] "Generic (PLEG): container finished" podID="fdcfdd0a-6f5a-44be-862f-2329a1f0a60c" containerID="da6e548e52ebc63e056c0a875475ba4d466c8e1d18276d29e942d7096ae1d548" exitCode=0
Feb 14 11:11:41 crc kubenswrapper[4736]: I0214 11:11:41.306054 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd" event={"ID":"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c","Type":"ContainerDied","Data":"da6e548e52ebc63e056c0a875475ba4d466c8e1d18276d29e942d7096ae1d548"}
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.733201 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.850141 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-inventory\") pod \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") "
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.850296 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql6dn\" (UniqueName: \"kubernetes.io/projected/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-kube-api-access-ql6dn\") pod \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") "
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.850364 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-ssh-key-openstack-edpm-ipam\") pod \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\" (UID: \"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c\") "
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.858499 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-kube-api-access-ql6dn" (OuterVolumeSpecName: "kube-api-access-ql6dn") pod "fdcfdd0a-6f5a-44be-862f-2329a1f0a60c" (UID: "fdcfdd0a-6f5a-44be-862f-2329a1f0a60c"). InnerVolumeSpecName "kube-api-access-ql6dn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.878064 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-inventory" (OuterVolumeSpecName: "inventory") pod "fdcfdd0a-6f5a-44be-862f-2329a1f0a60c" (UID: "fdcfdd0a-6f5a-44be-862f-2329a1f0a60c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.878810 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fdcfdd0a-6f5a-44be-862f-2329a1f0a60c" (UID: "fdcfdd0a-6f5a-44be-862f-2329a1f0a60c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.953006 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.953048 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 11:11:42 crc kubenswrapper[4736]: I0214 11:11:42.953058 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql6dn\" (UniqueName: \"kubernetes.io/projected/fdcfdd0a-6f5a-44be-862f-2329a1f0a60c-kube-api-access-ql6dn\") on node \"crc\" DevicePath \"\""
Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.321798 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd" event={"ID":"fdcfdd0a-6f5a-44be-862f-2329a1f0a60c","Type":"ContainerDied","Data":"ba92812316c2149dbc8b3eae63d21d540ae3b10f8f4d6b0eb804f7ff64ed7780"}
Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.322123 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba92812316c2149dbc8b3eae63d21d540ae3b10f8f4d6b0eb804f7ff64ed7780"
Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.322195 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd"
Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.401024 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp"]
Feb 14 11:11:43 crc kubenswrapper[4736]: E0214 11:11:43.401432 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcfdd0a-6f5a-44be-862f-2329a1f0a60c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.401458 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcfdd0a-6f5a-44be-862f-2329a1f0a60c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.401704 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcfdd0a-6f5a-44be-862f-2329a1f0a60c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.402917 4736 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.405447 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.407698 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.407897 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.408099 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.447163 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp"] Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.461710 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.462643 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc 
kubenswrapper[4736]: I0214 11:11:43.463172 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlp5r\" (UniqueName: \"kubernetes.io/projected/3b5bed78-9221-4954-b969-9a676c00a110-kube-api-access-vlp5r\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.564940 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.565267 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.565430 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlp5r\" (UniqueName: \"kubernetes.io/projected/3b5bed78-9221-4954-b969-9a676c00a110-kube-api-access-vlp5r\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.569726 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.569791 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.581062 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlp5r\" (UniqueName: \"kubernetes.io/projected/3b5bed78-9221-4954-b969-9a676c00a110-kube-api-access-vlp5r\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:43 crc kubenswrapper[4736]: I0214 11:11:43.759272 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:11:44 crc kubenswrapper[4736]: I0214 11:11:44.037009 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-z89bc"] Feb 14 11:11:44 crc kubenswrapper[4736]: I0214 11:11:44.049669 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4ksm2"] Feb 14 11:11:44 crc kubenswrapper[4736]: I0214 11:11:44.059007 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-z89bc"] Feb 14 11:11:44 crc kubenswrapper[4736]: I0214 11:11:44.065662 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4ksm2"] Feb 14 11:11:44 crc kubenswrapper[4736]: I0214 11:11:44.302360 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp"] Feb 14 11:11:44 crc kubenswrapper[4736]: I0214 11:11:44.342644 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" event={"ID":"3b5bed78-9221-4954-b969-9a676c00a110","Type":"ContainerStarted","Data":"445b3e9cafab76e504d036841e8c4f61e74b2465fabb18d268d5eb0b0c4d1db6"} Feb 14 11:11:44 crc kubenswrapper[4736]: I0214 11:11:44.410207 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df559ea6-6169-48d5-a47c-f765681b9a1e" path="/var/lib/kubelet/pods/df559ea6-6169-48d5-a47c-f765681b9a1e/volumes" Feb 14 11:11:44 crc kubenswrapper[4736]: I0214 11:11:44.411326 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f62557-0339-4cd9-884b-a3fdbc564ed0" path="/var/lib/kubelet/pods/f8f62557-0339-4cd9-884b-a3fdbc564ed0/volumes" Feb 14 11:11:45 crc kubenswrapper[4736]: I0214 11:11:45.352097 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" 
event={"ID":"3b5bed78-9221-4954-b969-9a676c00a110","Type":"ContainerStarted","Data":"955de72c3d572db271285f3232ae6220aa2a5522b8d33662cb0aa1d5469be6e5"} Feb 14 11:11:45 crc kubenswrapper[4736]: I0214 11:11:45.374377 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" podStartSLOduration=1.932375373 podStartE2EDuration="2.374356944s" podCreationTimestamp="2026-02-14 11:11:43 +0000 UTC" firstStartedPulling="2026-02-14 11:11:44.29945704 +0000 UTC m=+1814.668084418" lastFinishedPulling="2026-02-14 11:11:44.741438621 +0000 UTC m=+1815.110065989" observedRunningTime="2026-02-14 11:11:45.369891688 +0000 UTC m=+1815.738519076" watchObservedRunningTime="2026-02-14 11:11:45.374356944 +0000 UTC m=+1815.742984312" Feb 14 11:11:47 crc kubenswrapper[4736]: I0214 11:11:47.028524 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9bdr9"] Feb 14 11:11:47 crc kubenswrapper[4736]: I0214 11:11:47.049595 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9bdr9"] Feb 14 11:11:48 crc kubenswrapper[4736]: I0214 11:11:48.414646 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d43521c3-8892-4a34-af06-1d93a8f50c38" path="/var/lib/kubelet/pods/d43521c3-8892-4a34-af06-1d93a8f50c38/volumes" Feb 14 11:11:54 crc kubenswrapper[4736]: I0214 11:11:54.398099 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:11:54 crc kubenswrapper[4736]: E0214 11:11:54.399534 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" 
podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:12:09 crc kubenswrapper[4736]: I0214 11:12:09.398166 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:12:09 crc kubenswrapper[4736]: E0214 11:12:09.399066 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:12:20 crc kubenswrapper[4736]: I0214 11:12:20.411249 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:12:20 crc kubenswrapper[4736]: E0214 11:12:20.412139 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:12:32 crc kubenswrapper[4736]: I0214 11:12:32.398241 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:12:32 crc kubenswrapper[4736]: E0214 11:12:32.399229 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:12:35 crc kubenswrapper[4736]: I0214 11:12:35.053440 4736 scope.go:117] "RemoveContainer" containerID="9c4d94499295ca775b95c97766ec949b48b19959dd46670da5fa8d1f9152bb44" Feb 14 11:12:35 crc kubenswrapper[4736]: I0214 11:12:35.091421 4736 scope.go:117] "RemoveContainer" containerID="953a8f6acf6c555b2a3d91b7a06ac13470b5c5e20e72749e48b36b5c8486ef35" Feb 14 11:12:35 crc kubenswrapper[4736]: I0214 11:12:35.163649 4736 scope.go:117] "RemoveContainer" containerID="7c3ac7afc2de52097134e8b1842711fd77186d0d2fc2ec237c1207476458278f" Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.053572 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ef43-account-create-update-q9clt"] Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.063057 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-drfv5"] Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.073383 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ef43-account-create-update-q9clt"] Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.080895 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kfzf9"] Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.088026 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4dv5t"] Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.095207 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-drfv5"] Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.102002 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kfzf9"] Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.107912 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4dv5t"] Feb 14 11:12:38 crc 
kubenswrapper[4736]: I0214 11:12:38.411264 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9d691f-bb0e-42be-b3f9-9cfd979d4de7" path="/var/lib/kubelet/pods/4a9d691f-bb0e-42be-b3f9-9cfd979d4de7/volumes" Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.412382 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6391df09-ebfc-4e13-85b4-5aab4c8eefb0" path="/var/lib/kubelet/pods/6391df09-ebfc-4e13-85b4-5aab4c8eefb0/volumes" Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.413556 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd379fd3-2737-47ba-9f0f-59b46e24fed6" path="/var/lib/kubelet/pods/dd379fd3-2737-47ba-9f0f-59b46e24fed6/volumes" Feb 14 11:12:38 crc kubenswrapper[4736]: I0214 11:12:38.414653 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4cd324a-4e67-4132-ad44-0991435b9291" path="/var/lib/kubelet/pods/e4cd324a-4e67-4132-ad44-0991435b9291/volumes" Feb 14 11:12:39 crc kubenswrapper[4736]: I0214 11:12:39.032922 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0fc8-account-create-update-drkqr"] Feb 14 11:12:39 crc kubenswrapper[4736]: I0214 11:12:39.045635 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3b5f-account-create-update-xq5sl"] Feb 14 11:12:39 crc kubenswrapper[4736]: I0214 11:12:39.054290 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0fc8-account-create-update-drkqr"] Feb 14 11:12:39 crc kubenswrapper[4736]: I0214 11:12:39.062073 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3b5f-account-create-update-xq5sl"] Feb 14 11:12:40 crc kubenswrapper[4736]: I0214 11:12:40.409813 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7910764c-65b4-4645-9d60-c25cbea434d5" path="/var/lib/kubelet/pods/7910764c-65b4-4645-9d60-c25cbea434d5/volumes" Feb 14 11:12:40 crc kubenswrapper[4736]: I0214 
11:12:40.410550 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1294507-4b94-4f46-91b3-7a3dffdd7494" path="/var/lib/kubelet/pods/a1294507-4b94-4f46-91b3-7a3dffdd7494/volumes" Feb 14 11:12:45 crc kubenswrapper[4736]: I0214 11:12:45.397631 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:12:45 crc kubenswrapper[4736]: E0214 11:12:45.398285 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:12:58 crc kubenswrapper[4736]: I0214 11:12:58.065569 4736 generic.go:334] "Generic (PLEG): container finished" podID="3b5bed78-9221-4954-b969-9a676c00a110" containerID="955de72c3d572db271285f3232ae6220aa2a5522b8d33662cb0aa1d5469be6e5" exitCode=0 Feb 14 11:12:58 crc kubenswrapper[4736]: I0214 11:12:58.065669 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" event={"ID":"3b5bed78-9221-4954-b969-9a676c00a110","Type":"ContainerDied","Data":"955de72c3d572db271285f3232ae6220aa2a5522b8d33662cb0aa1d5469be6e5"} Feb 14 11:12:59 crc kubenswrapper[4736]: I0214 11:12:59.785457 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:12:59 crc kubenswrapper[4736]: I0214 11:12:59.920807 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-inventory\") pod \"3b5bed78-9221-4954-b969-9a676c00a110\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " Feb 14 11:12:59 crc kubenswrapper[4736]: I0214 11:12:59.920980 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-ssh-key-openstack-edpm-ipam\") pod \"3b5bed78-9221-4954-b969-9a676c00a110\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " Feb 14 11:12:59 crc kubenswrapper[4736]: I0214 11:12:59.921005 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlp5r\" (UniqueName: \"kubernetes.io/projected/3b5bed78-9221-4954-b969-9a676c00a110-kube-api-access-vlp5r\") pod \"3b5bed78-9221-4954-b969-9a676c00a110\" (UID: \"3b5bed78-9221-4954-b969-9a676c00a110\") " Feb 14 11:12:59 crc kubenswrapper[4736]: I0214 11:12:59.927807 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5bed78-9221-4954-b969-9a676c00a110-kube-api-access-vlp5r" (OuterVolumeSpecName: "kube-api-access-vlp5r") pod "3b5bed78-9221-4954-b969-9a676c00a110" (UID: "3b5bed78-9221-4954-b969-9a676c00a110"). InnerVolumeSpecName "kube-api-access-vlp5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:12:59 crc kubenswrapper[4736]: I0214 11:12:59.948201 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-inventory" (OuterVolumeSpecName: "inventory") pod "3b5bed78-9221-4954-b969-9a676c00a110" (UID: "3b5bed78-9221-4954-b969-9a676c00a110"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:12:59 crc kubenswrapper[4736]: I0214 11:12:59.948992 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b5bed78-9221-4954-b969-9a676c00a110" (UID: "3b5bed78-9221-4954-b969-9a676c00a110"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.023552 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.023984 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b5bed78-9221-4954-b969-9a676c00a110-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.024108 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlp5r\" (UniqueName: \"kubernetes.io/projected/3b5bed78-9221-4954-b969-9a676c00a110-kube-api-access-vlp5r\") on node \"crc\" DevicePath \"\"" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.089977 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" event={"ID":"3b5bed78-9221-4954-b969-9a676c00a110","Type":"ContainerDied","Data":"445b3e9cafab76e504d036841e8c4f61e74b2465fabb18d268d5eb0b0c4d1db6"} Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.090020 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="445b3e9cafab76e504d036841e8c4f61e74b2465fabb18d268d5eb0b0c4d1db6" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 
11:13:00.090081 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.203227 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph"] Feb 14 11:13:00 crc kubenswrapper[4736]: E0214 11:13:00.206888 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5bed78-9221-4954-b969-9a676c00a110" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.207026 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5bed78-9221-4954-b969-9a676c00a110" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.207361 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5bed78-9221-4954-b969-9a676c00a110" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.208236 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.233109 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.233404 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.233606 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.255960 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.267536 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph"] Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.341502 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pflph\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.341964 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqs8t\" (UniqueName: \"kubernetes.io/projected/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-kube-api-access-wqs8t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pflph\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 
11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.342271 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pflph\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.404910 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:13:00 crc kubenswrapper[4736]: E0214 11:13:00.405171 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.444494 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pflph\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.444596 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqs8t\" (UniqueName: \"kubernetes.io/projected/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-kube-api-access-wqs8t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pflph\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.444657 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pflph\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.449169 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pflph\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.456347 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pflph\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.467512 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqs8t\" (UniqueName: \"kubernetes.io/projected/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-kube-api-access-wqs8t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pflph\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" Feb 14 11:13:00 crc kubenswrapper[4736]: I0214 11:13:00.533277 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph"
Feb 14 11:13:01 crc kubenswrapper[4736]: I0214 11:13:01.122052 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph"]
Feb 14 11:13:02 crc kubenswrapper[4736]: I0214 11:13:02.106527 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" event={"ID":"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19","Type":"ContainerStarted","Data":"cbdb866166a160e2559693423377e7ae4f7fe7b68da6a448c5c2a27cdd75b67c"}
Feb 14 11:13:02 crc kubenswrapper[4736]: I0214 11:13:02.106840 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" event={"ID":"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19","Type":"ContainerStarted","Data":"57600478effba0bef6ca10f7576f68ca10e14217259fc8d886553c8158b73e66"}
Feb 14 11:13:02 crc kubenswrapper[4736]: I0214 11:13:02.130052 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" podStartSLOduration=1.693578426 podStartE2EDuration="2.130034583s" podCreationTimestamp="2026-02-14 11:13:00 +0000 UTC" firstStartedPulling="2026-02-14 11:13:01.130436239 +0000 UTC m=+1891.499063617" lastFinishedPulling="2026-02-14 11:13:01.566892406 +0000 UTC m=+1891.935519774" observedRunningTime="2026-02-14 11:13:02.121599785 +0000 UTC m=+1892.490227153" watchObservedRunningTime="2026-02-14 11:13:02.130034583 +0000 UTC m=+1892.498661951"
Feb 14 11:13:07 crc kubenswrapper[4736]: I0214 11:13:07.158721 4736 generic.go:334] "Generic (PLEG): container finished" podID="e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19" containerID="cbdb866166a160e2559693423377e7ae4f7fe7b68da6a448c5c2a27cdd75b67c" exitCode=0
Feb 14 11:13:07 crc kubenswrapper[4736]: I0214 11:13:07.159516 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" event={"ID":"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19","Type":"ContainerDied","Data":"cbdb866166a160e2559693423377e7ae4f7fe7b68da6a448c5c2a27cdd75b67c"}
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.551205 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph"
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.616721 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-ssh-key-openstack-edpm-ipam\") pod \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") "
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.617174 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-inventory\") pod \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") "
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.617368 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqs8t\" (UniqueName: \"kubernetes.io/projected/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-kube-api-access-wqs8t\") pod \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\" (UID: \"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19\") "
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.647844 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-kube-api-access-wqs8t" (OuterVolumeSpecName: "kube-api-access-wqs8t") pod "e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19" (UID: "e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19"). InnerVolumeSpecName "kube-api-access-wqs8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.683940 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-inventory" (OuterVolumeSpecName: "inventory") pod "e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19" (UID: "e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.688935 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19" (UID: "e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.756325 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.756373 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 11:13:08 crc kubenswrapper[4736]: I0214 11:13:08.756391 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqs8t\" (UniqueName: \"kubernetes.io/projected/e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19-kube-api-access-wqs8t\") on node \"crc\" DevicePath \"\""
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.181856 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph" event={"ID":"e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19","Type":"ContainerDied","Data":"57600478effba0bef6ca10f7576f68ca10e14217259fc8d886553c8158b73e66"}
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.182142 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57600478effba0bef6ca10f7576f68ca10e14217259fc8d886553c8158b73e66"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.182237 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pflph"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.267104 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"]
Feb 14 11:13:09 crc kubenswrapper[4736]: E0214 11:13:09.267442 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.267459 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.267637 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.268204 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.275903 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.276086 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.276250 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.276768 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.285607 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"]
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.370805 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxrr7\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.370864 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxrr7\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.370996 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phnt\" (UniqueName: \"kubernetes.io/projected/a0ab8569-328c-4ffb-89c5-d08ae95e5016-kube-api-access-6phnt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxrr7\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.472276 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxrr7\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.472325 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxrr7\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.472430 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6phnt\" (UniqueName: \"kubernetes.io/projected/a0ab8569-328c-4ffb-89c5-d08ae95e5016-kube-api-access-6phnt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxrr7\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.480512 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxrr7\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.486883 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxrr7\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.510854 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6phnt\" (UniqueName: \"kubernetes.io/projected/a0ab8569-328c-4ffb-89c5-d08ae95e5016-kube-api-access-6phnt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vxrr7\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.587236 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:09 crc kubenswrapper[4736]: I0214 11:13:09.956411 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"]
Feb 14 11:13:10 crc kubenswrapper[4736]: I0214 11:13:10.192283 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7" event={"ID":"a0ab8569-328c-4ffb-89c5-d08ae95e5016","Type":"ContainerStarted","Data":"1d0f96f841af6dbf4231fc3fed605ee112e4aae4e386f713d3aa0cec360060ca"}
Feb 14 11:13:11 crc kubenswrapper[4736]: I0214 11:13:11.203641 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7" event={"ID":"a0ab8569-328c-4ffb-89c5-d08ae95e5016","Type":"ContainerStarted","Data":"7586c771a56e2483297307da6e2b45543c72791225cae1ad688d74775bef3b1d"}
Feb 14 11:13:11 crc kubenswrapper[4736]: I0214 11:13:11.233435 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7" podStartSLOduration=1.832318777 podStartE2EDuration="2.233420148s" podCreationTimestamp="2026-02-14 11:13:09 +0000 UTC" firstStartedPulling="2026-02-14 11:13:09.968316094 +0000 UTC m=+1900.336943462" lastFinishedPulling="2026-02-14 11:13:10.369417455 +0000 UTC m=+1900.738044833" observedRunningTime="2026-02-14 11:13:11.226883584 +0000 UTC m=+1901.595510942" watchObservedRunningTime="2026-02-14 11:13:11.233420148 +0000 UTC m=+1901.602047516"
Feb 14 11:13:13 crc kubenswrapper[4736]: I0214 11:13:13.045175 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7m7bx"]
Feb 14 11:13:13 crc kubenswrapper[4736]: I0214 11:13:13.068569 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7m7bx"]
Feb 14 11:13:14 crc kubenswrapper[4736]: I0214 11:13:14.411435 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5abf3335-1f39-43c3-96e4-dd6f9a17c937" path="/var/lib/kubelet/pods/5abf3335-1f39-43c3-96e4-dd6f9a17c937/volumes"
Feb 14 11:13:15 crc kubenswrapper[4736]: I0214 11:13:15.398241 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:13:15 crc kubenswrapper[4736]: E0214 11:13:15.398991 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:13:30 crc kubenswrapper[4736]: I0214 11:13:30.404513 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:13:30 crc kubenswrapper[4736]: E0214 11:13:30.405613 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:13:35 crc kubenswrapper[4736]: I0214 11:13:35.315931 4736 scope.go:117] "RemoveContainer" containerID="1fa65ef887f7ac125daafaa07a682f6f00f69593f65153a2c311b1a55630ad30"
Feb 14 11:13:35 crc kubenswrapper[4736]: I0214 11:13:35.341346 4736 scope.go:117] "RemoveContainer" containerID="19d9890c4585ddfa9a0e11a2550a75a73e9a3f63a6d78dd11b2f306ac3bc8196"
Feb 14 11:13:35 crc kubenswrapper[4736]: I0214 11:13:35.404600 4736 scope.go:117] "RemoveContainer" containerID="03c399c40c53903582932d8742b556ba9cccaddc6083390f35702fdf86bedd85"
Feb 14 11:13:35 crc kubenswrapper[4736]: I0214 11:13:35.431112 4736 scope.go:117] "RemoveContainer" containerID="ecd623212845ee1bcd7da3c0bd24305a983bf9cd64e3156c8d9e396ccd596c17"
Feb 14 11:13:35 crc kubenswrapper[4736]: I0214 11:13:35.468953 4736 scope.go:117] "RemoveContainer" containerID="6d47a22858d8121410f4f25fa8c5b7e4f86cd7a59a421eb7e72f0a1a608bcb76"
Feb 14 11:13:35 crc kubenswrapper[4736]: I0214 11:13:35.507426 4736 scope.go:117] "RemoveContainer" containerID="476068fc5c3ba6780fd2e84fd1e5a53d75ccf3ae1d538b96c0040cb86e005803"
Feb 14 11:13:35 crc kubenswrapper[4736]: I0214 11:13:35.553110 4736 scope.go:117] "RemoveContainer" containerID="37b4277df1c2d40318be63f7d4ca6bf3a77664767f75aaadb125e96c49d6d047"
Feb 14 11:13:37 crc kubenswrapper[4736]: I0214 11:13:37.060498 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-lgxl6"]
Feb 14 11:13:37 crc kubenswrapper[4736]: I0214 11:13:37.075325 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-lgxl6"]
Feb 14 11:13:38 crc kubenswrapper[4736]: I0214 11:13:38.412097 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5" path="/var/lib/kubelet/pods/bd92bfae-f0bc-42d3-9d9c-ac2b2d3395e5/volumes"
Feb 14 11:13:42 crc kubenswrapper[4736]: I0214 11:13:42.035439 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nsn99"]
Feb 14 11:13:42 crc kubenswrapper[4736]: I0214 11:13:42.047727 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nsn99"]
Feb 14 11:13:42 crc kubenswrapper[4736]: I0214 11:13:42.419175 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b151935-66d0-44a9-b6bb-4760eb23e60f" path="/var/lib/kubelet/pods/5b151935-66d0-44a9-b6bb-4760eb23e60f/volumes"
Feb 14 11:13:45 crc kubenswrapper[4736]: I0214 11:13:45.397509 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:13:45 crc kubenswrapper[4736]: E0214 11:13:45.398213 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:13:51 crc kubenswrapper[4736]: I0214 11:13:51.579464 4736 generic.go:334] "Generic (PLEG): container finished" podID="a0ab8569-328c-4ffb-89c5-d08ae95e5016" containerID="7586c771a56e2483297307da6e2b45543c72791225cae1ad688d74775bef3b1d" exitCode=0
Feb 14 11:13:51 crc kubenswrapper[4736]: I0214 11:13:51.579585 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7" event={"ID":"a0ab8569-328c-4ffb-89c5-d08ae95e5016","Type":"ContainerDied","Data":"7586c771a56e2483297307da6e2b45543c72791225cae1ad688d74775bef3b1d"}
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.096932 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.162577 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-inventory\") pod \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") "
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.162703 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-ssh-key-openstack-edpm-ipam\") pod \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") "
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.162805 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6phnt\" (UniqueName: \"kubernetes.io/projected/a0ab8569-328c-4ffb-89c5-d08ae95e5016-kube-api-access-6phnt\") pod \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\" (UID: \"a0ab8569-328c-4ffb-89c5-d08ae95e5016\") "
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.177868 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ab8569-328c-4ffb-89c5-d08ae95e5016-kube-api-access-6phnt" (OuterVolumeSpecName: "kube-api-access-6phnt") pod "a0ab8569-328c-4ffb-89c5-d08ae95e5016" (UID: "a0ab8569-328c-4ffb-89c5-d08ae95e5016"). InnerVolumeSpecName "kube-api-access-6phnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.189713 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-inventory" (OuterVolumeSpecName: "inventory") pod "a0ab8569-328c-4ffb-89c5-d08ae95e5016" (UID: "a0ab8569-328c-4ffb-89c5-d08ae95e5016"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.194592 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a0ab8569-328c-4ffb-89c5-d08ae95e5016" (UID: "a0ab8569-328c-4ffb-89c5-d08ae95e5016"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.264450 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.264490 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0ab8569-328c-4ffb-89c5-d08ae95e5016-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.264505 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6phnt\" (UniqueName: \"kubernetes.io/projected/a0ab8569-328c-4ffb-89c5-d08ae95e5016-kube-api-access-6phnt\") on node \"crc\" DevicePath \"\""
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.607328 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7" event={"ID":"a0ab8569-328c-4ffb-89c5-d08ae95e5016","Type":"ContainerDied","Data":"1d0f96f841af6dbf4231fc3fed605ee112e4aae4e386f713d3aa0cec360060ca"}
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.607398 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d0f96f841af6dbf4231fc3fed605ee112e4aae4e386f713d3aa0cec360060ca"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.607499 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vxrr7"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.720468 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"]
Feb 14 11:13:53 crc kubenswrapper[4736]: E0214 11:13:53.720963 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ab8569-328c-4ffb-89c5-d08ae95e5016" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.720987 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab8569-328c-4ffb-89c5-d08ae95e5016" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.721262 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ab8569-328c-4ffb-89c5-d08ae95e5016" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.722051 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.725605 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.725662 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.726436 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.726720 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.753020 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"]
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.781012 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m97sp\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.781061 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfpz\" (UniqueName: \"kubernetes.io/projected/38ba8e01-1e02-4937-aeac-badb36edee69-kube-api-access-qrfpz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m97sp\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.781198 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m97sp\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.883353 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m97sp\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.883496 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m97sp\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.883534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfpz\" (UniqueName: \"kubernetes.io/projected/38ba8e01-1e02-4937-aeac-badb36edee69-kube-api-access-qrfpz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m97sp\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.887158 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m97sp\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.907130 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m97sp\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:53 crc kubenswrapper[4736]: I0214 11:13:53.907504 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfpz\" (UniqueName: \"kubernetes.io/projected/38ba8e01-1e02-4937-aeac-badb36edee69-kube-api-access-qrfpz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m97sp\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:54 crc kubenswrapper[4736]: I0214 11:13:54.046672 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:13:54 crc kubenswrapper[4736]: I0214 11:13:54.608115 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"]
Feb 14 11:13:54 crc kubenswrapper[4736]: I0214 11:13:54.618548 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp" event={"ID":"38ba8e01-1e02-4937-aeac-badb36edee69","Type":"ContainerStarted","Data":"c4c10c5a87ef8d7d2d481df262c037e400a75755bc200c1c4b3f5e853c1bd00d"}
Feb 14 11:13:55 crc kubenswrapper[4736]: I0214 11:13:55.641508 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp" event={"ID":"38ba8e01-1e02-4937-aeac-badb36edee69","Type":"ContainerStarted","Data":"3547ea2921e787806a049b7b7b8b06d979a530ffdfc1e491accd26e5047db784"}
Feb 14 11:13:55 crc kubenswrapper[4736]: I0214 11:13:55.677278 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp" podStartSLOduration=2.240340324 podStartE2EDuration="2.677260859s" podCreationTimestamp="2026-02-14 11:13:53 +0000 UTC" firstStartedPulling="2026-02-14 11:13:54.599552938 +0000 UTC m=+1944.968180296" lastFinishedPulling="2026-02-14 11:13:55.036473423 +0000 UTC m=+1945.405100831" observedRunningTime="2026-02-14 11:13:55.676415386 +0000 UTC m=+1946.045042764" watchObservedRunningTime="2026-02-14 11:13:55.677260859 +0000 UTC m=+1946.045888247"
Feb 14 11:13:58 crc kubenswrapper[4736]: I0214 11:13:58.397562 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:13:58 crc kubenswrapper[4736]: E0214 11:13:58.398592 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:14:11 crc kubenswrapper[4736]: I0214 11:14:11.396857 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:14:11 crc kubenswrapper[4736]: E0214 11:14:11.397684 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d"
Feb 14 11:14:23 crc kubenswrapper[4736]: I0214 11:14:23.049892 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wb8dk"]
Feb 14 11:14:23 crc kubenswrapper[4736]: I0214 11:14:23.065336 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wb8dk"]
Feb 14 11:14:24 crc kubenswrapper[4736]: I0214 11:14:24.412046 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8f51b8-16c7-4d32-9595-199616101d23" path="/var/lib/kubelet/pods/ec8f51b8-16c7-4d32-9595-199616101d23/volumes"
Feb 14 11:14:25 crc kubenswrapper[4736]: I0214 11:14:25.398319 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e"
Feb 14 11:14:25 crc kubenswrapper[4736]: I0214 11:14:25.951820 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"63627cf082421e3ad56b0c4fcb8aa173da170f9bb409ebe8bcd959c560af7e4a"}
Feb 14 11:14:35 crc kubenswrapper[4736]: I0214 11:14:35.695555 4736 scope.go:117] "RemoveContainer" containerID="0764dedc1916aca33f037a0e407984fed429279d56b12e232cce076b77ae6bf4"
Feb 14 11:14:35 crc kubenswrapper[4736]: I0214 11:14:35.786444 4736 scope.go:117] "RemoveContainer" containerID="7238a41ae74c3f788c852e4c07f0c8a0125cf92871fc5b195a73eec2a5e3b45d"
Feb 14 11:14:35 crc kubenswrapper[4736]: I0214 11:14:35.833897 4736 scope.go:117] "RemoveContainer" containerID="baff73b7359a735febd8471b1d7cdd096cdc1d642d1e6de5aac154ac28b30abf"
Feb 14 11:14:50 crc kubenswrapper[4736]: I0214 11:14:50.189996 4736 generic.go:334] "Generic (PLEG): container finished" podID="38ba8e01-1e02-4937-aeac-badb36edee69" containerID="3547ea2921e787806a049b7b7b8b06d979a530ffdfc1e491accd26e5047db784" exitCode=0
Feb 14 11:14:50 crc kubenswrapper[4736]: I0214 11:14:50.190211 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp" event={"ID":"38ba8e01-1e02-4937-aeac-badb36edee69","Type":"ContainerDied","Data":"3547ea2921e787806a049b7b7b8b06d979a530ffdfc1e491accd26e5047db784"}
Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.707055 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp"
Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.840075 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrfpz\" (UniqueName: \"kubernetes.io/projected/38ba8e01-1e02-4937-aeac-badb36edee69-kube-api-access-qrfpz\") pod \"38ba8e01-1e02-4937-aeac-badb36edee69\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") "
Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.840271 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-ssh-key-openstack-edpm-ipam\") pod \"38ba8e01-1e02-4937-aeac-badb36edee69\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") "
Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.840376 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-inventory\") pod \"38ba8e01-1e02-4937-aeac-badb36edee69\" (UID: \"38ba8e01-1e02-4937-aeac-badb36edee69\") "
Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.848790 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ba8e01-1e02-4937-aeac-badb36edee69-kube-api-access-qrfpz" (OuterVolumeSpecName: "kube-api-access-qrfpz") pod "38ba8e01-1e02-4937-aeac-badb36edee69" (UID: "38ba8e01-1e02-4937-aeac-badb36edee69"). InnerVolumeSpecName "kube-api-access-qrfpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.882194 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-inventory" (OuterVolumeSpecName: "inventory") pod "38ba8e01-1e02-4937-aeac-badb36edee69" (UID: "38ba8e01-1e02-4937-aeac-badb36edee69").
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.882454 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38ba8e01-1e02-4937-aeac-badb36edee69" (UID: "38ba8e01-1e02-4937-aeac-badb36edee69"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.942639 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrfpz\" (UniqueName: \"kubernetes.io/projected/38ba8e01-1e02-4937-aeac-badb36edee69-kube-api-access-qrfpz\") on node \"crc\" DevicePath \"\"" Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.942685 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:14:51 crc kubenswrapper[4736]: I0214 11:14:51.942707 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ba8e01-1e02-4937-aeac-badb36edee69-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.228699 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp" event={"ID":"38ba8e01-1e02-4937-aeac-badb36edee69","Type":"ContainerDied","Data":"c4c10c5a87ef8d7d2d481df262c037e400a75755bc200c1c4b3f5e853c1bd00d"} Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.228774 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4c10c5a87ef8d7d2d481df262c037e400a75755bc200c1c4b3f5e853c1bd00d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 
11:14:52.228836 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m97sp" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.318907 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-kdd4d"] Feb 14 11:14:52 crc kubenswrapper[4736]: E0214 11:14:52.319334 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ba8e01-1e02-4937-aeac-badb36edee69" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.319355 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ba8e01-1e02-4937-aeac-badb36edee69" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.319567 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ba8e01-1e02-4937-aeac-badb36edee69" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.320317 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.322118 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.323341 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.323727 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.323842 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.335585 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-kdd4d"] Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.461924 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-kdd4d\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.461976 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-kdd4d\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.462018 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9lmjb\" (UniqueName: \"kubernetes.io/projected/a8f1507b-722e-46b0-a239-48a5100e9971-kube-api-access-9lmjb\") pod \"ssh-known-hosts-edpm-deployment-kdd4d\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.564259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-kdd4d\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.565161 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-kdd4d\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.565350 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lmjb\" (UniqueName: \"kubernetes.io/projected/a8f1507b-722e-46b0-a239-48a5100e9971-kube-api-access-9lmjb\") pod \"ssh-known-hosts-edpm-deployment-kdd4d\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.570459 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-kdd4d\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 
11:14:52.576937 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-kdd4d\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.600683 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lmjb\" (UniqueName: \"kubernetes.io/projected/a8f1507b-722e-46b0-a239-48a5100e9971-kube-api-access-9lmjb\") pod \"ssh-known-hosts-edpm-deployment-kdd4d\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:52 crc kubenswrapper[4736]: I0214 11:14:52.639037 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:14:53 crc kubenswrapper[4736]: I0214 11:14:53.187957 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-kdd4d"] Feb 14 11:14:53 crc kubenswrapper[4736]: I0214 11:14:53.237971 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" event={"ID":"a8f1507b-722e-46b0-a239-48a5100e9971","Type":"ContainerStarted","Data":"6dbeacac91cf3c4e453b80b71fac0b0bef37564975e6925a57a6e552a9761f9a"} Feb 14 11:14:54 crc kubenswrapper[4736]: I0214 11:14:54.248710 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" event={"ID":"a8f1507b-722e-46b0-a239-48a5100e9971","Type":"ContainerStarted","Data":"4a2cf126aa0d98e4fad47f24c12ee0300cdad290646adc10668dc4e27376e56b"} Feb 14 11:14:54 crc kubenswrapper[4736]: I0214 11:14:54.270622 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" podStartSLOduration=1.719336927 
podStartE2EDuration="2.270603229s" podCreationTimestamp="2026-02-14 11:14:52 +0000 UTC" firstStartedPulling="2026-02-14 11:14:53.198062681 +0000 UTC m=+2003.566690049" lastFinishedPulling="2026-02-14 11:14:53.749328983 +0000 UTC m=+2004.117956351" observedRunningTime="2026-02-14 11:14:54.266618407 +0000 UTC m=+2004.635245775" watchObservedRunningTime="2026-02-14 11:14:54.270603229 +0000 UTC m=+2004.639230607" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.163781 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7"] Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.165546 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.170673 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.171196 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.193211 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7"] Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.332469 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgrqw\" (UniqueName: \"kubernetes.io/projected/5c5c0616-3abb-4607-804e-f3c634217dcb-kube-api-access-xgrqw\") pod \"collect-profiles-29517795-xcdb7\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.332507 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c5c0616-3abb-4607-804e-f3c634217dcb-config-volume\") pod \"collect-profiles-29517795-xcdb7\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.332567 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c5c0616-3abb-4607-804e-f3c634217dcb-secret-volume\") pod \"collect-profiles-29517795-xcdb7\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.434101 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgrqw\" (UniqueName: \"kubernetes.io/projected/5c5c0616-3abb-4607-804e-f3c634217dcb-kube-api-access-xgrqw\") pod \"collect-profiles-29517795-xcdb7\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.434137 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c5c0616-3abb-4607-804e-f3c634217dcb-config-volume\") pod \"collect-profiles-29517795-xcdb7\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.434196 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c5c0616-3abb-4607-804e-f3c634217dcb-secret-volume\") pod \"collect-profiles-29517795-xcdb7\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.435255 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c5c0616-3abb-4607-804e-f3c634217dcb-config-volume\") pod \"collect-profiles-29517795-xcdb7\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.459612 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgrqw\" (UniqueName: \"kubernetes.io/projected/5c5c0616-3abb-4607-804e-f3c634217dcb-kube-api-access-xgrqw\") pod \"collect-profiles-29517795-xcdb7\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.460168 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c5c0616-3abb-4607-804e-f3c634217dcb-secret-volume\") pod \"collect-profiles-29517795-xcdb7\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.497295 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:00 crc kubenswrapper[4736]: I0214 11:15:00.981372 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7"] Feb 14 11:15:01 crc kubenswrapper[4736]: I0214 11:15:01.310473 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" event={"ID":"5c5c0616-3abb-4607-804e-f3c634217dcb","Type":"ContainerStarted","Data":"771a55b7c312e88831a05f414a481ffadc0277a83cd3d2a5d66c0ba01d377ecd"} Feb 14 11:15:01 crc kubenswrapper[4736]: I0214 11:15:01.310859 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" event={"ID":"5c5c0616-3abb-4607-804e-f3c634217dcb","Type":"ContainerStarted","Data":"f2cf08f2df734e5e2f6a35fd9014367fb129681dcf314dfd0d36e5412f657665"} Feb 14 11:15:01 crc kubenswrapper[4736]: I0214 11:15:01.313567 4736 generic.go:334] "Generic (PLEG): container finished" podID="a8f1507b-722e-46b0-a239-48a5100e9971" containerID="4a2cf126aa0d98e4fad47f24c12ee0300cdad290646adc10668dc4e27376e56b" exitCode=0 Feb 14 11:15:01 crc kubenswrapper[4736]: I0214 11:15:01.313633 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" event={"ID":"a8f1507b-722e-46b0-a239-48a5100e9971","Type":"ContainerDied","Data":"4a2cf126aa0d98e4fad47f24c12ee0300cdad290646adc10668dc4e27376e56b"} Feb 14 11:15:01 crc kubenswrapper[4736]: I0214 11:15:01.334654 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" podStartSLOduration=1.334625812 podStartE2EDuration="1.334625812s" podCreationTimestamp="2026-02-14 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-14 11:15:01.33205092 +0000 UTC m=+2011.700678298" watchObservedRunningTime="2026-02-14 11:15:01.334625812 +0000 UTC m=+2011.703253220" Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.326875 4736 generic.go:334] "Generic (PLEG): container finished" podID="5c5c0616-3abb-4607-804e-f3c634217dcb" containerID="771a55b7c312e88831a05f414a481ffadc0277a83cd3d2a5d66c0ba01d377ecd" exitCode=0 Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.326969 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" event={"ID":"5c5c0616-3abb-4607-804e-f3c634217dcb","Type":"ContainerDied","Data":"771a55b7c312e88831a05f414a481ffadc0277a83cd3d2a5d66c0ba01d377ecd"} Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.725574 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.891040 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-inventory-0\") pod \"a8f1507b-722e-46b0-a239-48a5100e9971\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.891085 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lmjb\" (UniqueName: \"kubernetes.io/projected/a8f1507b-722e-46b0-a239-48a5100e9971-kube-api-access-9lmjb\") pod \"a8f1507b-722e-46b0-a239-48a5100e9971\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.891169 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-ssh-key-openstack-edpm-ipam\") pod 
\"a8f1507b-722e-46b0-a239-48a5100e9971\" (UID: \"a8f1507b-722e-46b0-a239-48a5100e9971\") " Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.895829 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f1507b-722e-46b0-a239-48a5100e9971-kube-api-access-9lmjb" (OuterVolumeSpecName: "kube-api-access-9lmjb") pod "a8f1507b-722e-46b0-a239-48a5100e9971" (UID: "a8f1507b-722e-46b0-a239-48a5100e9971"). InnerVolumeSpecName "kube-api-access-9lmjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.914990 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8f1507b-722e-46b0-a239-48a5100e9971" (UID: "a8f1507b-722e-46b0-a239-48a5100e9971"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.924772 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "a8f1507b-722e-46b0-a239-48a5100e9971" (UID: "a8f1507b-722e-46b0-a239-48a5100e9971"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.995459 4736 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.995526 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lmjb\" (UniqueName: \"kubernetes.io/projected/a8f1507b-722e-46b0-a239-48a5100e9971-kube-api-access-9lmjb\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:02 crc kubenswrapper[4736]: I0214 11:15:02.995556 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f1507b-722e-46b0-a239-48a5100e9971-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.338140 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" event={"ID":"a8f1507b-722e-46b0-a239-48a5100e9971","Type":"ContainerDied","Data":"6dbeacac91cf3c4e453b80b71fac0b0bef37564975e6925a57a6e552a9761f9a"} Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.338209 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dbeacac91cf3c4e453b80b71fac0b0bef37564975e6925a57a6e552a9761f9a" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.338173 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kdd4d" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.479239 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4"] Feb 14 11:15:03 crc kubenswrapper[4736]: E0214 11:15:03.486356 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f1507b-722e-46b0-a239-48a5100e9971" containerName="ssh-known-hosts-edpm-deployment" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.486430 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f1507b-722e-46b0-a239-48a5100e9971" containerName="ssh-known-hosts-edpm-deployment" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.486659 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f1507b-722e-46b0-a239-48a5100e9971" containerName="ssh-known-hosts-edpm-deployment" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.487324 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.492679 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.493163 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.498525 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.498607 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.500478 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4"] Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.617122 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9xcr4\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.617359 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9xcr4\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.617472 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v2v7\" (UniqueName: \"kubernetes.io/projected/4795d395-5dcc-4284-b6ee-607b2c9a1f97-kube-api-access-9v2v7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9xcr4\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.719677 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9xcr4\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.720021 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9xcr4\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.720090 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v2v7\" (UniqueName: \"kubernetes.io/projected/4795d395-5dcc-4284-b6ee-607b2c9a1f97-kube-api-access-9v2v7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9xcr4\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.726587 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9xcr4\" (UID: 
\"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.726587 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9xcr4\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.740082 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v2v7\" (UniqueName: \"kubernetes.io/projected/4795d395-5dcc-4284-b6ee-607b2c9a1f97-kube-api-access-9v2v7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9xcr4\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.817134 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.817978 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.922462 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgrqw\" (UniqueName: \"kubernetes.io/projected/5c5c0616-3abb-4607-804e-f3c634217dcb-kube-api-access-xgrqw\") pod \"5c5c0616-3abb-4607-804e-f3c634217dcb\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.922830 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c5c0616-3abb-4607-804e-f3c634217dcb-secret-volume\") pod \"5c5c0616-3abb-4607-804e-f3c634217dcb\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.922861 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c5c0616-3abb-4607-804e-f3c634217dcb-config-volume\") pod \"5c5c0616-3abb-4607-804e-f3c634217dcb\" (UID: \"5c5c0616-3abb-4607-804e-f3c634217dcb\") " Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.924318 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5c0616-3abb-4607-804e-f3c634217dcb-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c5c0616-3abb-4607-804e-f3c634217dcb" (UID: "5c5c0616-3abb-4607-804e-f3c634217dcb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.959422 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5c0616-3abb-4607-804e-f3c634217dcb-kube-api-access-xgrqw" (OuterVolumeSpecName: "kube-api-access-xgrqw") pod "5c5c0616-3abb-4607-804e-f3c634217dcb" (UID: "5c5c0616-3abb-4607-804e-f3c634217dcb"). 
InnerVolumeSpecName "kube-api-access-xgrqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:15:03 crc kubenswrapper[4736]: I0214 11:15:03.976457 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5c0616-3abb-4607-804e-f3c634217dcb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c5c0616-3abb-4607-804e-f3c634217dcb" (UID: "5c5c0616-3abb-4607-804e-f3c634217dcb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:15:04 crc kubenswrapper[4736]: I0214 11:15:04.025686 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgrqw\" (UniqueName: \"kubernetes.io/projected/5c5c0616-3abb-4607-804e-f3c634217dcb-kube-api-access-xgrqw\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:04 crc kubenswrapper[4736]: I0214 11:15:04.025732 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c5c0616-3abb-4607-804e-f3c634217dcb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:04 crc kubenswrapper[4736]: I0214 11:15:04.025826 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c5c0616-3abb-4607-804e-f3c634217dcb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:04 crc kubenswrapper[4736]: I0214 11:15:04.351160 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" event={"ID":"5c5c0616-3abb-4607-804e-f3c634217dcb","Type":"ContainerDied","Data":"f2cf08f2df734e5e2f6a35fd9014367fb129681dcf314dfd0d36e5412f657665"} Feb 14 11:15:04 crc kubenswrapper[4736]: I0214 11:15:04.351222 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2cf08f2df734e5e2f6a35fd9014367fb129681dcf314dfd0d36e5412f657665" Feb 14 11:15:04 crc kubenswrapper[4736]: I0214 11:15:04.351302 4736 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7" Feb 14 11:15:04 crc kubenswrapper[4736]: I0214 11:15:04.439231 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6"] Feb 14 11:15:04 crc kubenswrapper[4736]: I0214 11:15:04.446496 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517750-rkwd6"] Feb 14 11:15:04 crc kubenswrapper[4736]: I0214 11:15:04.466840 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4"] Feb 14 11:15:04 crc kubenswrapper[4736]: W0214 11:15:04.476880 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4795d395_5dcc_4284_b6ee_607b2c9a1f97.slice/crio-ae12f13accabf5a6f2f87d097d725f81ab22d1055271f155bada59ae520eaffd WatchSource:0}: Error finding container ae12f13accabf5a6f2f87d097d725f81ab22d1055271f155bada59ae520eaffd: Status 404 returned error can't find the container with id ae12f13accabf5a6f2f87d097d725f81ab22d1055271f155bada59ae520eaffd Feb 14 11:15:05 crc kubenswrapper[4736]: I0214 11:15:05.365198 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" event={"ID":"4795d395-5dcc-4284-b6ee-607b2c9a1f97","Type":"ContainerStarted","Data":"ee75805b291930bf239f953d2c6c849cd65a828af2d7322bad4abcf5ece09142"} Feb 14 11:15:05 crc kubenswrapper[4736]: I0214 11:15:05.365540 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" event={"ID":"4795d395-5dcc-4284-b6ee-607b2c9a1f97","Type":"ContainerStarted","Data":"ae12f13accabf5a6f2f87d097d725f81ab22d1055271f155bada59ae520eaffd"} Feb 14 11:15:05 crc kubenswrapper[4736]: I0214 11:15:05.394129 4736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" podStartSLOduration=1.948972707 podStartE2EDuration="2.394105501s" podCreationTimestamp="2026-02-14 11:15:03 +0000 UTC" firstStartedPulling="2026-02-14 11:15:04.479392583 +0000 UTC m=+2014.848019951" lastFinishedPulling="2026-02-14 11:15:04.924525347 +0000 UTC m=+2015.293152745" observedRunningTime="2026-02-14 11:15:05.389467182 +0000 UTC m=+2015.758094590" watchObservedRunningTime="2026-02-14 11:15:05.394105501 +0000 UTC m=+2015.762732899" Feb 14 11:15:06 crc kubenswrapper[4736]: I0214 11:15:06.411342 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a942552-de44-4c27-8779-4cf239de59a3" path="/var/lib/kubelet/pods/1a942552-de44-4c27-8779-4cf239de59a3/volumes" Feb 14 11:15:13 crc kubenswrapper[4736]: I0214 11:15:13.442108 4736 generic.go:334] "Generic (PLEG): container finished" podID="4795d395-5dcc-4284-b6ee-607b2c9a1f97" containerID="ee75805b291930bf239f953d2c6c849cd65a828af2d7322bad4abcf5ece09142" exitCode=0 Feb 14 11:15:13 crc kubenswrapper[4736]: I0214 11:15:13.442193 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" event={"ID":"4795d395-5dcc-4284-b6ee-607b2c9a1f97","Type":"ContainerDied","Data":"ee75805b291930bf239f953d2c6c849cd65a828af2d7322bad4abcf5ece09142"} Feb 14 11:15:14 crc kubenswrapper[4736]: I0214 11:15:14.890087 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.080202 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-ssh-key-openstack-edpm-ipam\") pod \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.080265 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-inventory\") pod \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.080357 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v2v7\" (UniqueName: \"kubernetes.io/projected/4795d395-5dcc-4284-b6ee-607b2c9a1f97-kube-api-access-9v2v7\") pod \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\" (UID: \"4795d395-5dcc-4284-b6ee-607b2c9a1f97\") " Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.085496 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4795d395-5dcc-4284-b6ee-607b2c9a1f97-kube-api-access-9v2v7" (OuterVolumeSpecName: "kube-api-access-9v2v7") pod "4795d395-5dcc-4284-b6ee-607b2c9a1f97" (UID: "4795d395-5dcc-4284-b6ee-607b2c9a1f97"). InnerVolumeSpecName "kube-api-access-9v2v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.112473 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-inventory" (OuterVolumeSpecName: "inventory") pod "4795d395-5dcc-4284-b6ee-607b2c9a1f97" (UID: "4795d395-5dcc-4284-b6ee-607b2c9a1f97"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.119858 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4795d395-5dcc-4284-b6ee-607b2c9a1f97" (UID: "4795d395-5dcc-4284-b6ee-607b2c9a1f97"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.182316 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.182487 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4795d395-5dcc-4284-b6ee-607b2c9a1f97-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.182585 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v2v7\" (UniqueName: \"kubernetes.io/projected/4795d395-5dcc-4284-b6ee-607b2c9a1f97-kube-api-access-9v2v7\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.473879 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" event={"ID":"4795d395-5dcc-4284-b6ee-607b2c9a1f97","Type":"ContainerDied","Data":"ae12f13accabf5a6f2f87d097d725f81ab22d1055271f155bada59ae520eaffd"} Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.473960 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae12f13accabf5a6f2f87d097d725f81ab22d1055271f155bada59ae520eaffd" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 
11:15:15.474060 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9xcr4" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.569965 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh"] Feb 14 11:15:15 crc kubenswrapper[4736]: E0214 11:15:15.570824 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4795d395-5dcc-4284-b6ee-607b2c9a1f97" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.570965 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4795d395-5dcc-4284-b6ee-607b2c9a1f97" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 11:15:15 crc kubenswrapper[4736]: E0214 11:15:15.571136 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5c0616-3abb-4607-804e-f3c634217dcb" containerName="collect-profiles" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.571261 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5c0616-3abb-4607-804e-f3c634217dcb" containerName="collect-profiles" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.571696 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5c0616-3abb-4607-804e-f3c634217dcb" containerName="collect-profiles" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.571847 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4795d395-5dcc-4284-b6ee-607b2c9a1f97" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.572908 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.575431 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.575909 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.576117 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.576301 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.582684 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh"] Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.594891 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.595184 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.595272 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5zcj\" (UniqueName: \"kubernetes.io/projected/9cdcee64-bc6c-40ca-8db3-50335948db44-kube-api-access-f5zcj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.696427 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.696498 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5zcj\" (UniqueName: \"kubernetes.io/projected/9cdcee64-bc6c-40ca-8db3-50335948db44-kube-api-access-f5zcj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.696547 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.701083 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.702933 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.724240 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5zcj\" (UniqueName: \"kubernetes.io/projected/9cdcee64-bc6c-40ca-8db3-50335948db44-kube-api-access-f5zcj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:15 crc kubenswrapper[4736]: I0214 11:15:15.898423 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:16 crc kubenswrapper[4736]: I0214 11:15:16.448893 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh"] Feb 14 11:15:16 crc kubenswrapper[4736]: I0214 11:15:16.458525 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 11:15:16 crc kubenswrapper[4736]: I0214 11:15:16.483042 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" event={"ID":"9cdcee64-bc6c-40ca-8db3-50335948db44","Type":"ContainerStarted","Data":"3021bda593e836676a19737a0ba251dcc3a3b46da82295c92069461cd483212e"} Feb 14 11:15:17 crc kubenswrapper[4736]: I0214 11:15:17.492106 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" event={"ID":"9cdcee64-bc6c-40ca-8db3-50335948db44","Type":"ContainerStarted","Data":"2888b61ab68d0c446bff43a7d8fc0cd04dc9da3fb5ce8b45f17b7fa33fc52b2b"} Feb 14 11:15:17 crc kubenswrapper[4736]: I0214 11:15:17.509357 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" podStartSLOduration=2.06725122 podStartE2EDuration="2.509340849s" podCreationTimestamp="2026-02-14 11:15:15 +0000 UTC" firstStartedPulling="2026-02-14 11:15:16.458335892 +0000 UTC m=+2026.826963260" lastFinishedPulling="2026-02-14 11:15:16.900425521 +0000 UTC m=+2027.269052889" observedRunningTime="2026-02-14 11:15:17.506017567 +0000 UTC m=+2027.874644955" watchObservedRunningTime="2026-02-14 11:15:17.509340849 +0000 UTC m=+2027.877968217" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.294809 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pm7pg"] Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.298044 
4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.319044 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-utilities\") pod \"redhat-operators-pm7pg\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") " pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.319196 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzd4r\" (UniqueName: \"kubernetes.io/projected/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-kube-api-access-dzd4r\") pod \"redhat-operators-pm7pg\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") " pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.319285 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-catalog-content\") pod \"redhat-operators-pm7pg\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") " pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.319915 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pm7pg"] Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.420773 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzd4r\" (UniqueName: \"kubernetes.io/projected/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-kube-api-access-dzd4r\") pod \"redhat-operators-pm7pg\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") " pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.420847 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-catalog-content\") pod \"redhat-operators-pm7pg\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") " pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.420973 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-utilities\") pod \"redhat-operators-pm7pg\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") " pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.422980 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-catalog-content\") pod \"redhat-operators-pm7pg\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") " pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.424328 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-utilities\") pod \"redhat-operators-pm7pg\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") " pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.443697 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzd4r\" (UniqueName: \"kubernetes.io/projected/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-kube-api-access-dzd4r\") pod \"redhat-operators-pm7pg\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") " pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:21 crc kubenswrapper[4736]: I0214 11:15:21.624224 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:22 crc kubenswrapper[4736]: I0214 11:15:22.120312 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pm7pg"] Feb 14 11:15:22 crc kubenswrapper[4736]: I0214 11:15:22.532555 4736 generic.go:334] "Generic (PLEG): container finished" podID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerID="d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99" exitCode=0 Feb 14 11:15:22 crc kubenswrapper[4736]: I0214 11:15:22.532754 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm7pg" event={"ID":"61186271-bb8f-4457-b6b7-1fd53dbbc0d2","Type":"ContainerDied","Data":"d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99"} Feb 14 11:15:22 crc kubenswrapper[4736]: I0214 11:15:22.532778 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm7pg" event={"ID":"61186271-bb8f-4457-b6b7-1fd53dbbc0d2","Type":"ContainerStarted","Data":"fd5fcec9b73fb366eb73a39f129ceaba986ce5fcfd710fa3880d41527296f0a2"} Feb 14 11:15:23 crc kubenswrapper[4736]: I0214 11:15:23.545115 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm7pg" event={"ID":"61186271-bb8f-4457-b6b7-1fd53dbbc0d2","Type":"ContainerStarted","Data":"c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7"} Feb 14 11:15:26 crc kubenswrapper[4736]: I0214 11:15:26.570908 4736 generic.go:334] "Generic (PLEG): container finished" podID="9cdcee64-bc6c-40ca-8db3-50335948db44" containerID="2888b61ab68d0c446bff43a7d8fc0cd04dc9da3fb5ce8b45f17b7fa33fc52b2b" exitCode=0 Feb 14 11:15:26 crc kubenswrapper[4736]: I0214 11:15:26.570991 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" 
event={"ID":"9cdcee64-bc6c-40ca-8db3-50335948db44","Type":"ContainerDied","Data":"2888b61ab68d0c446bff43a7d8fc0cd04dc9da3fb5ce8b45f17b7fa33fc52b2b"} Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.072077 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.263234 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-ssh-key-openstack-edpm-ipam\") pod \"9cdcee64-bc6c-40ca-8db3-50335948db44\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.263780 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5zcj\" (UniqueName: \"kubernetes.io/projected/9cdcee64-bc6c-40ca-8db3-50335948db44-kube-api-access-f5zcj\") pod \"9cdcee64-bc6c-40ca-8db3-50335948db44\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.263895 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-inventory\") pod \"9cdcee64-bc6c-40ca-8db3-50335948db44\" (UID: \"9cdcee64-bc6c-40ca-8db3-50335948db44\") " Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.270265 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdcee64-bc6c-40ca-8db3-50335948db44-kube-api-access-f5zcj" (OuterVolumeSpecName: "kube-api-access-f5zcj") pod "9cdcee64-bc6c-40ca-8db3-50335948db44" (UID: "9cdcee64-bc6c-40ca-8db3-50335948db44"). InnerVolumeSpecName "kube-api-access-f5zcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.293538 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9cdcee64-bc6c-40ca-8db3-50335948db44" (UID: "9cdcee64-bc6c-40ca-8db3-50335948db44"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.295693 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-inventory" (OuterVolumeSpecName: "inventory") pod "9cdcee64-bc6c-40ca-8db3-50335948db44" (UID: "9cdcee64-bc6c-40ca-8db3-50335948db44"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.366019 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.366067 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5zcj\" (UniqueName: \"kubernetes.io/projected/9cdcee64-bc6c-40ca-8db3-50335948db44-kube-api-access-f5zcj\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.366079 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cdcee64-bc6c-40ca-8db3-50335948db44-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.594988 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" 
event={"ID":"9cdcee64-bc6c-40ca-8db3-50335948db44","Type":"ContainerDied","Data":"3021bda593e836676a19737a0ba251dcc3a3b46da82295c92069461cd483212e"} Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.595037 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3021bda593e836676a19737a0ba251dcc3a3b46da82295c92069461cd483212e" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.595946 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.603360 4736 generic.go:334] "Generic (PLEG): container finished" podID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerID="c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7" exitCode=0 Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.603649 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm7pg" event={"ID":"61186271-bb8f-4457-b6b7-1fd53dbbc0d2","Type":"ContainerDied","Data":"c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7"} Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.700262 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd"] Feb 14 11:15:28 crc kubenswrapper[4736]: E0214 11:15:28.703356 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdcee64-bc6c-40ca-8db3-50335948db44" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.703392 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdcee64-bc6c-40ca-8db3-50335948db44" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.703963 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdcee64-bc6c-40ca-8db3-50335948db44" 
containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.704702 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.709630 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.709816 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.709654 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.710044 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.710329 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.710504 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.714943 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd"] Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.718847 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.719406 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 
11:15:28.877025 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.877323 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.877347 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.877395 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.877469 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.877602 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.877708 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.877827 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.877998 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.878024 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.878824 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.878936 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc 
kubenswrapper[4736]: I0214 11:15:28.879050 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz88f\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-kube-api-access-gz88f\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.879227 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.981168 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.981434 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.981554 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.981679 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.981884 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.982065 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.982365 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.983349 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.989361 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.989406 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.989435 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.989465 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz88f\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-kube-api-access-gz88f\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.989491 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.989658 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.989799 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" 
(UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.987284 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.986605 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.986623 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.988107 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.988102 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.993900 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.994117 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.994548 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 
11:15:28.994958 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.996809 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:28 crc kubenswrapper[4736]: I0214 11:15:28.997348 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:29 crc kubenswrapper[4736]: I0214 11:15:29.000231 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:29 crc kubenswrapper[4736]: I0214 11:15:29.007634 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz88f\" (UniqueName: 
\"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-kube-api-access-gz88f\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:29 crc kubenswrapper[4736]: I0214 11:15:29.034610 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:15:29 crc kubenswrapper[4736]: I0214 11:15:29.618518 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm7pg" event={"ID":"61186271-bb8f-4457-b6b7-1fd53dbbc0d2","Type":"ContainerStarted","Data":"466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1"} Feb 14 11:15:29 crc kubenswrapper[4736]: I0214 11:15:29.645599 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pm7pg" podStartSLOduration=2.1279383960000002 podStartE2EDuration="8.645565734s" podCreationTimestamp="2026-02-14 11:15:21 +0000 UTC" firstStartedPulling="2026-02-14 11:15:22.534772747 +0000 UTC m=+2032.903400115" lastFinishedPulling="2026-02-14 11:15:29.052400085 +0000 UTC m=+2039.421027453" observedRunningTime="2026-02-14 11:15:29.643410014 +0000 UTC m=+2040.012037432" watchObservedRunningTime="2026-02-14 11:15:29.645565734 +0000 UTC m=+2040.014193182" Feb 14 11:15:29 crc kubenswrapper[4736]: W0214 11:15:29.691042 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod928b193b_069f_4f4b_80a6_13c347302fcf.slice/crio-6956bb7078fa4286b5d649eb436c6964cba595dce2b6fd487193b996f888e294 WatchSource:0}: Error finding container 6956bb7078fa4286b5d649eb436c6964cba595dce2b6fd487193b996f888e294: Status 404 returned error can't find the container with id 6956bb7078fa4286b5d649eb436c6964cba595dce2b6fd487193b996f888e294 Feb 14 
11:15:29 crc kubenswrapper[4736]: I0214 11:15:29.709326 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd"] Feb 14 11:15:30 crc kubenswrapper[4736]: I0214 11:15:30.630470 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" event={"ID":"928b193b-069f-4f4b-80a6-13c347302fcf","Type":"ContainerStarted","Data":"e85c00f9d4601495e5d78535a07e9769d797a1acd807554ae20cf6857576f6a2"} Feb 14 11:15:30 crc kubenswrapper[4736]: I0214 11:15:30.630947 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" event={"ID":"928b193b-069f-4f4b-80a6-13c347302fcf","Type":"ContainerStarted","Data":"6956bb7078fa4286b5d649eb436c6964cba595dce2b6fd487193b996f888e294"} Feb 14 11:15:30 crc kubenswrapper[4736]: I0214 11:15:30.662708 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" podStartSLOduration=2.26054246 podStartE2EDuration="2.662678706s" podCreationTimestamp="2026-02-14 11:15:28 +0000 UTC" firstStartedPulling="2026-02-14 11:15:29.693923571 +0000 UTC m=+2040.062550939" lastFinishedPulling="2026-02-14 11:15:30.096059817 +0000 UTC m=+2040.464687185" observedRunningTime="2026-02-14 11:15:30.652552273 +0000 UTC m=+2041.021179671" watchObservedRunningTime="2026-02-14 11:15:30.662678706 +0000 UTC m=+2041.031306124" Feb 14 11:15:31 crc kubenswrapper[4736]: I0214 11:15:31.625256 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:31 crc kubenswrapper[4736]: I0214 11:15:31.628445 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pm7pg" Feb 14 11:15:32 crc kubenswrapper[4736]: I0214 11:15:32.736850 4736 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-pm7pg" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="registry-server" probeResult="failure" output=< Feb 14 11:15:32 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:15:32 crc kubenswrapper[4736]: > Feb 14 11:15:35 crc kubenswrapper[4736]: I0214 11:15:35.953919 4736 scope.go:117] "RemoveContainer" containerID="edd7a7993cf58d5f124b07e456b139d7364ff190fc4771825ec2d0f566119cca" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.574844 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p7lww"] Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.576992 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.596208 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7lww"] Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.639073 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dscqt\" (UniqueName: \"kubernetes.io/projected/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-kube-api-access-dscqt\") pod \"certified-operators-p7lww\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") " pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.639198 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-catalog-content\") pod \"certified-operators-p7lww\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") " pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.639231 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-utilities\") pod \"certified-operators-p7lww\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") " pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.741118 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dscqt\" (UniqueName: \"kubernetes.io/projected/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-kube-api-access-dscqt\") pod \"certified-operators-p7lww\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") " pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.742090 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-catalog-content\") pod \"certified-operators-p7lww\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") " pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.742145 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-utilities\") pod \"certified-operators-p7lww\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") " pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.744196 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-catalog-content\") pod \"certified-operators-p7lww\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") " pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.744613 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-utilities\") pod \"certified-operators-p7lww\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") " pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.767199 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dscqt\" (UniqueName: \"kubernetes.io/projected/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-kube-api-access-dscqt\") pod \"certified-operators-p7lww\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") " pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:40 crc kubenswrapper[4736]: I0214 11:15:40.893385 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7lww" Feb 14 11:15:41 crc kubenswrapper[4736]: I0214 11:15:41.502058 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7lww"] Feb 14 11:15:41 crc kubenswrapper[4736]: I0214 11:15:41.883805 4736 generic.go:334] "Generic (PLEG): container finished" podID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerID="e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c" exitCode=0 Feb 14 11:15:41 crc kubenswrapper[4736]: I0214 11:15:41.883916 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7lww" event={"ID":"b27317d7-bfc4-447d-aa03-bfe659ef0cf8","Type":"ContainerDied","Data":"e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c"} Feb 14 11:15:41 crc kubenswrapper[4736]: I0214 11:15:41.884152 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7lww" event={"ID":"b27317d7-bfc4-447d-aa03-bfe659ef0cf8","Type":"ContainerStarted","Data":"d95e93b6bea900b605d9ed898a1784d954f2130aed91584c8c9bb74923cb8bb5"} Feb 14 11:15:42 crc kubenswrapper[4736]: I0214 
11:15:42.674860 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pm7pg" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="registry-server" probeResult="failure" output=<
Feb 14 11:15:42 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s
Feb 14 11:15:42 crc kubenswrapper[4736]: >
Feb 14 11:15:42 crc kubenswrapper[4736]: I0214 11:15:42.893608 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7lww" event={"ID":"b27317d7-bfc4-447d-aa03-bfe659ef0cf8","Type":"ContainerStarted","Data":"34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23"}
Feb 14 11:15:44 crc kubenswrapper[4736]: I0214 11:15:44.913176 4736 generic.go:334] "Generic (PLEG): container finished" podID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerID="34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23" exitCode=0
Feb 14 11:15:44 crc kubenswrapper[4736]: I0214 11:15:44.913192 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7lww" event={"ID":"b27317d7-bfc4-447d-aa03-bfe659ef0cf8","Type":"ContainerDied","Data":"34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23"}
Feb 14 11:15:45 crc kubenswrapper[4736]: I0214 11:15:45.924538 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7lww" event={"ID":"b27317d7-bfc4-447d-aa03-bfe659ef0cf8","Type":"ContainerStarted","Data":"af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a"}
Feb 14 11:15:45 crc kubenswrapper[4736]: I0214 11:15:45.951020 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p7lww" podStartSLOduration=2.525136421 podStartE2EDuration="5.950999375s" podCreationTimestamp="2026-02-14 11:15:40 +0000 UTC" firstStartedPulling="2026-02-14 11:15:41.885354822 +0000 UTC m=+2052.253982200" lastFinishedPulling="2026-02-14 11:15:45.311217786 +0000 UTC m=+2055.679845154" observedRunningTime="2026-02-14 11:15:45.942661652 +0000 UTC m=+2056.311289020" watchObservedRunningTime="2026-02-14 11:15:45.950999375 +0000 UTC m=+2056.319626763"
Feb 14 11:15:50 crc kubenswrapper[4736]: I0214 11:15:50.895102 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p7lww"
Feb 14 11:15:50 crc kubenswrapper[4736]: I0214 11:15:50.895694 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p7lww"
Feb 14 11:15:51 crc kubenswrapper[4736]: I0214 11:15:51.938118 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p7lww" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerName="registry-server" probeResult="failure" output=<
Feb 14 11:15:51 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s
Feb 14 11:15:51 crc kubenswrapper[4736]: >
Feb 14 11:15:52 crc kubenswrapper[4736]: I0214 11:15:52.689456 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pm7pg" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="registry-server" probeResult="failure" output=<
Feb 14 11:15:52 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s
Feb 14 11:15:52 crc kubenswrapper[4736]: >
Feb 14 11:16:00 crc kubenswrapper[4736]: I0214 11:16:00.952447 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p7lww"
Feb 14 11:16:01 crc kubenswrapper[4736]: I0214 11:16:01.022569 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p7lww"
Feb 14 11:16:01 crc kubenswrapper[4736]: I0214 11:16:01.196592 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7lww"]
Feb 14 11:16:01 crc kubenswrapper[4736]: I0214 11:16:01.686423 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pm7pg"
Feb 14 11:16:01 crc kubenswrapper[4736]: I0214 11:16:01.757123 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pm7pg"
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.045316 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p7lww" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerName="registry-server" containerID="cri-o://af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a" gracePeriod=2
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.484048 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7lww"
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.610434 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dscqt\" (UniqueName: \"kubernetes.io/projected/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-kube-api-access-dscqt\") pod \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") "
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.610841 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-catalog-content\") pod \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") "
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.611077 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-utilities\") pod \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\" (UID: \"b27317d7-bfc4-447d-aa03-bfe659ef0cf8\") "
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.611936 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-utilities" (OuterVolumeSpecName: "utilities") pod "b27317d7-bfc4-447d-aa03-bfe659ef0cf8" (UID: "b27317d7-bfc4-447d-aa03-bfe659ef0cf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.623778 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-kube-api-access-dscqt" (OuterVolumeSpecName: "kube-api-access-dscqt") pod "b27317d7-bfc4-447d-aa03-bfe659ef0cf8" (UID: "b27317d7-bfc4-447d-aa03-bfe659ef0cf8"). InnerVolumeSpecName "kube-api-access-dscqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.659565 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b27317d7-bfc4-447d-aa03-bfe659ef0cf8" (UID: "b27317d7-bfc4-447d-aa03-bfe659ef0cf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.714534 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.714571 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dscqt\" (UniqueName: \"kubernetes.io/projected/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-kube-api-access-dscqt\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:02 crc kubenswrapper[4736]: I0214 11:16:02.714584 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27317d7-bfc4-447d-aa03-bfe659ef0cf8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.064582 4736 generic.go:334] "Generic (PLEG): container finished" podID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerID="af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a" exitCode=0
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.064639 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7lww" event={"ID":"b27317d7-bfc4-447d-aa03-bfe659ef0cf8","Type":"ContainerDied","Data":"af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a"}
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.064679 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7lww" event={"ID":"b27317d7-bfc4-447d-aa03-bfe659ef0cf8","Type":"ContainerDied","Data":"d95e93b6bea900b605d9ed898a1784d954f2130aed91584c8c9bb74923cb8bb5"}
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.064713 4736 scope.go:117] "RemoveContainer" containerID="af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a"
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.064939 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7lww"
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.100158 4736 scope.go:117] "RemoveContainer" containerID="34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23"
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.121914 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7lww"]
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.136472 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p7lww"]
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.161000 4736 scope.go:117] "RemoveContainer" containerID="e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c"
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.200779 4736 scope.go:117] "RemoveContainer" containerID="af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a"
Feb 14 11:16:03 crc kubenswrapper[4736]: E0214 11:16:03.203136 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a\": container with ID starting with af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a not found: ID does not exist" containerID="af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a"
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.203181 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a"} err="failed to get container status \"af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a\": rpc error: code = NotFound desc = could not find container \"af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a\": container with ID starting with af5796d434a8de693f29a81fac1e6bf7036ce53e74d7dfca4031c5495b58351a not found: ID does not exist"
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.203202 4736 scope.go:117] "RemoveContainer" containerID="34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23"
Feb 14 11:16:03 crc kubenswrapper[4736]: E0214 11:16:03.204553 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23\": container with ID starting with 34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23 not found: ID does not exist" containerID="34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23"
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.204576 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23"} err="failed to get container status \"34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23\": rpc error: code = NotFound desc = could not find container \"34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23\": container with ID starting with 34a3303548894353926adcf3dfc33c89333668f463f674606b505e49266c7f23 not found: ID does not exist"
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.204591 4736 scope.go:117] "RemoveContainer" containerID="e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c"
Feb 14 11:16:03 crc kubenswrapper[4736]: E0214 11:16:03.204875 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c\": container with ID starting with e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c not found: ID does not exist" containerID="e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c"
Feb 14 11:16:03 crc kubenswrapper[4736]: I0214 11:16:03.204898 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c"} err="failed to get container status \"e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c\": rpc error: code = NotFound desc = could not find container \"e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c\": container with ID starting with e3d4b52bc477d67db11084cd0d4f74c7f928a60a3b118ded1d3ecd4d5b01d34c not found: ID does not exist"
Feb 14 11:16:04 crc kubenswrapper[4736]: I0214 11:16:04.003884 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pm7pg"]
Feb 14 11:16:04 crc kubenswrapper[4736]: I0214 11:16:04.005022 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pm7pg" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="registry-server" containerID="cri-o://466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1" gracePeriod=2
Feb 14 11:16:04 crc kubenswrapper[4736]: I0214 11:16:04.409461 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" path="/var/lib/kubelet/pods/b27317d7-bfc4-447d-aa03-bfe659ef0cf8/volumes"
Feb 14 11:16:04 crc kubenswrapper[4736]: I0214 11:16:04.443992 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm7pg"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.032626 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-catalog-content\") pod \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") "
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.032814 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzd4r\" (UniqueName: \"kubernetes.io/projected/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-kube-api-access-dzd4r\") pod \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") "
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.032874 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-utilities\") pod \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\" (UID: \"61186271-bb8f-4457-b6b7-1fd53dbbc0d2\") "
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.046529 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-utilities" (OuterVolumeSpecName: "utilities") pod "61186271-bb8f-4457-b6b7-1fd53dbbc0d2" (UID: "61186271-bb8f-4457-b6b7-1fd53dbbc0d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.054086 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-kube-api-access-dzd4r" (OuterVolumeSpecName: "kube-api-access-dzd4r") pod "61186271-bb8f-4457-b6b7-1fd53dbbc0d2" (UID: "61186271-bb8f-4457-b6b7-1fd53dbbc0d2"). InnerVolumeSpecName "kube-api-access-dzd4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.087731 4736 generic.go:334] "Generic (PLEG): container finished" podID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerID="466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1" exitCode=0
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.087794 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm7pg" event={"ID":"61186271-bb8f-4457-b6b7-1fd53dbbc0d2","Type":"ContainerDied","Data":"466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1"}
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.087828 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm7pg" event={"ID":"61186271-bb8f-4457-b6b7-1fd53dbbc0d2","Type":"ContainerDied","Data":"fd5fcec9b73fb366eb73a39f129ceaba986ce5fcfd710fa3880d41527296f0a2"}
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.087849 4736 scope.go:117] "RemoveContainer" containerID="466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.087964 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm7pg"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.114248 4736 scope.go:117] "RemoveContainer" containerID="c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.138580 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzd4r\" (UniqueName: \"kubernetes.io/projected/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-kube-api-access-dzd4r\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.138835 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.150935 4736 scope.go:117] "RemoveContainer" containerID="d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.187443 4736 scope.go:117] "RemoveContainer" containerID="466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1"
Feb 14 11:16:05 crc kubenswrapper[4736]: E0214 11:16:05.191927 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1\": container with ID starting with 466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1 not found: ID does not exist" containerID="466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.192112 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1"} err="failed to get container status \"466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1\": rpc error: code = NotFound desc = could not find container \"466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1\": container with ID starting with 466617b6adbcabe4a178db8325c1353dca4db8a735e617a563b4c313ce9212d1 not found: ID does not exist"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.192267 4736 scope.go:117] "RemoveContainer" containerID="c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7"
Feb 14 11:16:05 crc kubenswrapper[4736]: E0214 11:16:05.192831 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7\": container with ID starting with c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7 not found: ID does not exist" containerID="c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.192863 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7"} err="failed to get container status \"c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7\": rpc error: code = NotFound desc = could not find container \"c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7\": container with ID starting with c269001d950de9a974b3469854b8698e1e857507d2e072d397d29be4aa9858a7 not found: ID does not exist"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.192882 4736 scope.go:117] "RemoveContainer" containerID="d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99"
Feb 14 11:16:05 crc kubenswrapper[4736]: E0214 11:16:05.194624 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99\": container with ID starting with d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99 not found: ID does not exist" containerID="d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.194734 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99"} err="failed to get container status \"d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99\": rpc error: code = NotFound desc = could not find container \"d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99\": container with ID starting with d4af46babbc42dd0386b013160a8083cbbcb432a5d5c7ed7b44dfbe56b938e99 not found: ID does not exist"
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.205137 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61186271-bb8f-4457-b6b7-1fd53dbbc0d2" (UID: "61186271-bb8f-4457-b6b7-1fd53dbbc0d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.240527 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61186271-bb8f-4457-b6b7-1fd53dbbc0d2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.418706 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pm7pg"]
Feb 14 11:16:05 crc kubenswrapper[4736]: I0214 11:16:05.425963 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pm7pg"]
Feb 14 11:16:06 crc kubenswrapper[4736]: I0214 11:16:06.407581 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" path="/var/lib/kubelet/pods/61186271-bb8f-4457-b6b7-1fd53dbbc0d2/volumes"
Feb 14 11:16:09 crc kubenswrapper[4736]: I0214 11:16:09.124889 4736 generic.go:334] "Generic (PLEG): container finished" podID="928b193b-069f-4f4b-80a6-13c347302fcf" containerID="e85c00f9d4601495e5d78535a07e9769d797a1acd807554ae20cf6857576f6a2" exitCode=0
Feb 14 11:16:09 crc kubenswrapper[4736]: I0214 11:16:09.125118 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" event={"ID":"928b193b-069f-4f4b-80a6-13c347302fcf","Type":"ContainerDied","Data":"e85c00f9d4601495e5d78535a07e9769d797a1acd807554ae20cf6857576f6a2"}
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.714536 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd"
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810429 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-inventory\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810486 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ovn-combined-ca-bundle\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810544 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-nova-combined-ca-bundle\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810575 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810617 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-neutron-metadata-combined-ca-bundle\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810657 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-bootstrap-combined-ca-bundle\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810711 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810757 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810802 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz88f\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-kube-api-access-gz88f\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810819 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-libvirt-combined-ca-bundle\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810870 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-telemetry-combined-ca-bundle\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810892 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ssh-key-openstack-edpm-ipam\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810931 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-repo-setup-combined-ca-bundle\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.810963 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"928b193b-069f-4f4b-80a6-13c347302fcf\" (UID: \"928b193b-069f-4f4b-80a6-13c347302fcf\") "
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.816862 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.816963 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.817150 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.817589 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.818186 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.819240 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.819350 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.820336 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.821461 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.821725 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.825005 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.835883 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-kube-api-access-gz88f" (OuterVolumeSpecName: "kube-api-access-gz88f") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "kube-api-access-gz88f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.850349 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.859977 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-inventory" (OuterVolumeSpecName: "inventory") pod "928b193b-069f-4f4b-80a6-13c347302fcf" (UID: "928b193b-069f-4f4b-80a6-13c347302fcf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913041 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913073 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913084 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913096 4736 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913106 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913115 4736 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913124 4736 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913136 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913147 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913158 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz88f\" (UniqueName: \"kubernetes.io/projected/928b193b-069f-4f4b-80a6-13c347302fcf-kube-api-access-gz88f\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913166 4736 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913175 4736 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913186 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:16:10 crc kubenswrapper[4736]: I0214 11:16:10.913194 4736 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928b193b-069f-4f4b-80a6-13c347302fcf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.155615 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" event={"ID":"928b193b-069f-4f4b-80a6-13c347302fcf","Type":"ContainerDied","Data":"6956bb7078fa4286b5d649eb436c6964cba595dce2b6fd487193b996f888e294"} Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.155672 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.155687 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6956bb7078fa4286b5d649eb436c6964cba595dce2b6fd487193b996f888e294" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365102 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt"] Feb 14 11:16:11 crc kubenswrapper[4736]: E0214 11:16:11.365530 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="extract-utilities" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365546 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="extract-utilities" Feb 14 11:16:11 crc kubenswrapper[4736]: E0214 11:16:11.365559 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="extract-content" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365566 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="extract-content" Feb 14 11:16:11 crc kubenswrapper[4736]: E0214 11:16:11.365575 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="registry-server" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365581 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="registry-server" Feb 14 11:16:11 crc kubenswrapper[4736]: E0214 11:16:11.365596 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerName="extract-utilities" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365604 4736 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerName="extract-utilities" Feb 14 11:16:11 crc kubenswrapper[4736]: E0214 11:16:11.365627 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerName="registry-server" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365633 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerName="registry-server" Feb 14 11:16:11 crc kubenswrapper[4736]: E0214 11:16:11.365643 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928b193b-069f-4f4b-80a6-13c347302fcf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365649 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="928b193b-069f-4f4b-80a6-13c347302fcf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 11:16:11 crc kubenswrapper[4736]: E0214 11:16:11.365663 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerName="extract-content" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365671 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerName="extract-content" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365873 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27317d7-bfc4-447d-aa03-bfe659ef0cf8" containerName="registry-server" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365896 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="928b193b-069f-4f4b-80a6-13c347302fcf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.365905 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="61186271-bb8f-4457-b6b7-1fd53dbbc0d2" containerName="registry-server" Feb 
14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.366643 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.369811 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.377007 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.377462 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.377781 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.379342 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt"] Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.382862 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.421460 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.421561 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncnnd\" (UniqueName: \"kubernetes.io/projected/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-kube-api-access-ncnnd\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.421592 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.421641 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.421779 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.524218 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 
11:16:11.524542 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncnnd\" (UniqueName: \"kubernetes.io/projected/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-kube-api-access-ncnnd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.524646 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.524738 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.525158 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.525632 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.528955 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.529950 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.533299 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.541940 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncnnd\" (UniqueName: \"kubernetes.io/projected/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-kube-api-access-ncnnd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5wgrt\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:11 crc kubenswrapper[4736]: I0214 11:16:11.681456 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:16:12 crc kubenswrapper[4736]: I0214 11:16:12.252798 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt"] Feb 14 11:16:12 crc kubenswrapper[4736]: W0214 11:16:12.258086 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0fc1129_2e48_4afa_ad54_fce50eaaeddc.slice/crio-9c5cf57f8f5cfeaadd1fd4fe8ea9b1d8db0c2b9dc0ea4d680be6a6e9d6aec6a1 WatchSource:0}: Error finding container 9c5cf57f8f5cfeaadd1fd4fe8ea9b1d8db0c2b9dc0ea4d680be6a6e9d6aec6a1: Status 404 returned error can't find the container with id 9c5cf57f8f5cfeaadd1fd4fe8ea9b1d8db0c2b9dc0ea4d680be6a6e9d6aec6a1 Feb 14 11:16:13 crc kubenswrapper[4736]: I0214 11:16:13.172274 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" event={"ID":"c0fc1129-2e48-4afa-ad54-fce50eaaeddc","Type":"ContainerStarted","Data":"ce42a2d81cb779eff4817457bc001ed5958bb13f09be5e206d92dd87ce97670f"} Feb 14 11:16:13 crc kubenswrapper[4736]: I0214 11:16:13.172581 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" event={"ID":"c0fc1129-2e48-4afa-ad54-fce50eaaeddc","Type":"ContainerStarted","Data":"9c5cf57f8f5cfeaadd1fd4fe8ea9b1d8db0c2b9dc0ea4d680be6a6e9d6aec6a1"} Feb 14 11:16:13 crc kubenswrapper[4736]: I0214 11:16:13.198192 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" podStartSLOduration=1.527878425 podStartE2EDuration="2.198174006s" podCreationTimestamp="2026-02-14 11:16:11 +0000 UTC" firstStartedPulling="2026-02-14 11:16:12.259921557 +0000 UTC m=+2082.628548925" lastFinishedPulling="2026-02-14 11:16:12.930217138 +0000 UTC m=+2083.298844506" observedRunningTime="2026-02-14 
11:16:13.190419827 +0000 UTC m=+2083.559047215" watchObservedRunningTime="2026-02-14 11:16:13.198174006 +0000 UTC m=+2083.566801374" Feb 14 11:16:47 crc kubenswrapper[4736]: I0214 11:16:47.695756 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:16:47 crc kubenswrapper[4736]: I0214 11:16:47.696286 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:17:17 crc kubenswrapper[4736]: I0214 11:17:17.698102 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:17:17 crc kubenswrapper[4736]: I0214 11:17:17.698560 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:17:20 crc kubenswrapper[4736]: I0214 11:17:20.419786 4736 generic.go:334] "Generic (PLEG): container finished" podID="c0fc1129-2e48-4afa-ad54-fce50eaaeddc" containerID="ce42a2d81cb779eff4817457bc001ed5958bb13f09be5e206d92dd87ce97670f" exitCode=0 Feb 14 11:17:20 crc kubenswrapper[4736]: I0214 11:17:20.419868 4736 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" event={"ID":"c0fc1129-2e48-4afa-ad54-fce50eaaeddc","Type":"ContainerDied","Data":"ce42a2d81cb779eff4817457bc001ed5958bb13f09be5e206d92dd87ce97670f"} Feb 14 11:17:21 crc kubenswrapper[4736]: I0214 11:17:21.855615 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:17:21 crc kubenswrapper[4736]: I0214 11:17:21.978036 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovncontroller-config-0\") pod \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " Feb 14 11:17:21 crc kubenswrapper[4736]: I0214 11:17:21.978297 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-inventory\") pod \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " Feb 14 11:17:21 crc kubenswrapper[4736]: I0214 11:17:21.978572 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovn-combined-ca-bundle\") pod \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " Feb 14 11:17:21 crc kubenswrapper[4736]: I0214 11:17:21.978998 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ssh-key-openstack-edpm-ipam\") pod \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " Feb 14 11:17:21 crc kubenswrapper[4736]: I0214 11:17:21.979141 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ncnnd\" (UniqueName: \"kubernetes.io/projected/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-kube-api-access-ncnnd\") pod \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\" (UID: \"c0fc1129-2e48-4afa-ad54-fce50eaaeddc\") " Feb 14 11:17:21 crc kubenswrapper[4736]: I0214 11:17:21.983045 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c0fc1129-2e48-4afa-ad54-fce50eaaeddc" (UID: "c0fc1129-2e48-4afa-ad54-fce50eaaeddc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:17:21 crc kubenswrapper[4736]: I0214 11:17:21.983243 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-kube-api-access-ncnnd" (OuterVolumeSpecName: "kube-api-access-ncnnd") pod "c0fc1129-2e48-4afa-ad54-fce50eaaeddc" (UID: "c0fc1129-2e48-4afa-ad54-fce50eaaeddc"). InnerVolumeSpecName "kube-api-access-ncnnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.010256 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "c0fc1129-2e48-4afa-ad54-fce50eaaeddc" (UID: "c0fc1129-2e48-4afa-ad54-fce50eaaeddc"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.021157 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c0fc1129-2e48-4afa-ad54-fce50eaaeddc" (UID: "c0fc1129-2e48-4afa-ad54-fce50eaaeddc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.025926 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-inventory" (OuterVolumeSpecName: "inventory") pod "c0fc1129-2e48-4afa-ad54-fce50eaaeddc" (UID: "c0fc1129-2e48-4afa-ad54-fce50eaaeddc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.081278 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.081559 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.081568 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncnnd\" (UniqueName: \"kubernetes.io/projected/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-kube-api-access-ncnnd\") on node \"crc\" DevicePath \"\"" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.081577 4736 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.081586 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fc1129-2e48-4afa-ad54-fce50eaaeddc-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.434927 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" event={"ID":"c0fc1129-2e48-4afa-ad54-fce50eaaeddc","Type":"ContainerDied","Data":"9c5cf57f8f5cfeaadd1fd4fe8ea9b1d8db0c2b9dc0ea4d680be6a6e9d6aec6a1"} Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.434965 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c5cf57f8f5cfeaadd1fd4fe8ea9b1d8db0c2b9dc0ea4d680be6a6e9d6aec6a1" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.435025 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5wgrt" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.540618 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm"] Feb 14 11:17:22 crc kubenswrapper[4736]: E0214 11:17:22.541197 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fc1129-2e48-4afa-ad54-fce50eaaeddc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.541267 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fc1129-2e48-4afa-ad54-fce50eaaeddc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.541518 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fc1129-2e48-4afa-ad54-fce50eaaeddc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.542942 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.545870 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.546422 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.546793 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.547103 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.547391 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.552931 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm"] Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.554325 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.695818 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.695882 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.695970 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vjpr\" (UniqueName: \"kubernetes.io/projected/2c3c97eb-a17e-429f-84da-df394440c78c-kube-api-access-8vjpr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.696002 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.696034 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.696326 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.797675 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vjpr\" (UniqueName: \"kubernetes.io/projected/2c3c97eb-a17e-429f-84da-df394440c78c-kube-api-access-8vjpr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.797912 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.798035 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.798199 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.798349 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.798449 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.803313 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.803316 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.803445 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.806555 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.807225 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.822042 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vjpr\" (UniqueName: \"kubernetes.io/projected/2c3c97eb-a17e-429f-84da-df394440c78c-kube-api-access-8vjpr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm\" (UID: 
\"2c3c97eb-a17e-429f-84da-df394440c78c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:22 crc kubenswrapper[4736]: I0214 11:17:22.875579 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:17:23 crc kubenswrapper[4736]: I0214 11:17:23.511864 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm"] Feb 14 11:17:24 crc kubenswrapper[4736]: I0214 11:17:24.464109 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" event={"ID":"2c3c97eb-a17e-429f-84da-df394440c78c","Type":"ContainerStarted","Data":"f4153b1d019ef89a9050092a8553278d38768d747ff7873bb0829dfc903ae8a5"} Feb 14 11:17:24 crc kubenswrapper[4736]: I0214 11:17:24.464428 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" event={"ID":"2c3c97eb-a17e-429f-84da-df394440c78c","Type":"ContainerStarted","Data":"3de8abca5e03f26d086841d3c99976e2968379036de822cdb15402fa9f6119e2"} Feb 14 11:17:24 crc kubenswrapper[4736]: I0214 11:17:24.487870 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" podStartSLOduration=2.022730796 podStartE2EDuration="2.487849238s" podCreationTimestamp="2026-02-14 11:17:22 +0000 UTC" firstStartedPulling="2026-02-14 11:17:23.520964462 +0000 UTC m=+2153.889591830" lastFinishedPulling="2026-02-14 11:17:23.986082904 +0000 UTC m=+2154.354710272" observedRunningTime="2026-02-14 11:17:24.480967684 +0000 UTC m=+2154.849595072" watchObservedRunningTime="2026-02-14 11:17:24.487849238 +0000 UTC m=+2154.856476626" Feb 14 11:17:47 crc kubenswrapper[4736]: I0214 11:17:47.695453 4736 patch_prober.go:28] interesting 
pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:17:47 crc kubenswrapper[4736]: I0214 11:17:47.697463 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:17:47 crc kubenswrapper[4736]: I0214 11:17:47.697714 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:17:47 crc kubenswrapper[4736]: I0214 11:17:47.698720 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63627cf082421e3ad56b0c4fcb8aa173da170f9bb409ebe8bcd959c560af7e4a"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:17:47 crc kubenswrapper[4736]: I0214 11:17:47.698871 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://63627cf082421e3ad56b0c4fcb8aa173da170f9bb409ebe8bcd959c560af7e4a" gracePeriod=600 Feb 14 11:17:48 crc kubenswrapper[4736]: I0214 11:17:48.656671 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="63627cf082421e3ad56b0c4fcb8aa173da170f9bb409ebe8bcd959c560af7e4a" exitCode=0 Feb 14 11:17:48 crc kubenswrapper[4736]: I0214 11:17:48.656750 
4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"63627cf082421e3ad56b0c4fcb8aa173da170f9bb409ebe8bcd959c560af7e4a"} Feb 14 11:17:48 crc kubenswrapper[4736]: I0214 11:17:48.657288 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a"} Feb 14 11:17:48 crc kubenswrapper[4736]: I0214 11:17:48.657308 4736 scope.go:117] "RemoveContainer" containerID="30b4e998b5a06f1f3f9679b813dad17650cd4e653ec85c416747705824fb516e" Feb 14 11:18:18 crc kubenswrapper[4736]: I0214 11:18:18.914864 4736 generic.go:334] "Generic (PLEG): container finished" podID="2c3c97eb-a17e-429f-84da-df394440c78c" containerID="f4153b1d019ef89a9050092a8553278d38768d747ff7873bb0829dfc903ae8a5" exitCode=0 Feb 14 11:18:18 crc kubenswrapper[4736]: I0214 11:18:18.914966 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" event={"ID":"2c3c97eb-a17e-429f-84da-df394440c78c","Type":"ContainerDied","Data":"f4153b1d019ef89a9050092a8553278d38768d747ff7873bb0829dfc903ae8a5"} Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.609911 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.722722 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-metadata-combined-ca-bundle\") pod \"2c3c97eb-a17e-429f-84da-df394440c78c\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.722858 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-nova-metadata-neutron-config-0\") pod \"2c3c97eb-a17e-429f-84da-df394440c78c\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.722954 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vjpr\" (UniqueName: \"kubernetes.io/projected/2c3c97eb-a17e-429f-84da-df394440c78c-kube-api-access-8vjpr\") pod \"2c3c97eb-a17e-429f-84da-df394440c78c\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.723023 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-inventory\") pod \"2c3c97eb-a17e-429f-84da-df394440c78c\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.723105 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-ssh-key-openstack-edpm-ipam\") pod \"2c3c97eb-a17e-429f-84da-df394440c78c\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " Feb 14 11:18:20 crc 
kubenswrapper[4736]: I0214 11:18:20.723160 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"2c3c97eb-a17e-429f-84da-df394440c78c\" (UID: \"2c3c97eb-a17e-429f-84da-df394440c78c\") " Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.732887 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2c3c97eb-a17e-429f-84da-df394440c78c" (UID: "2c3c97eb-a17e-429f-84da-df394440c78c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.747137 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3c97eb-a17e-429f-84da-df394440c78c-kube-api-access-8vjpr" (OuterVolumeSpecName: "kube-api-access-8vjpr") pod "2c3c97eb-a17e-429f-84da-df394440c78c" (UID: "2c3c97eb-a17e-429f-84da-df394440c78c"). InnerVolumeSpecName "kube-api-access-8vjpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.751337 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "2c3c97eb-a17e-429f-84da-df394440c78c" (UID: "2c3c97eb-a17e-429f-84da-df394440c78c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.756346 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-inventory" (OuterVolumeSpecName: "inventory") pod "2c3c97eb-a17e-429f-84da-df394440c78c" (UID: "2c3c97eb-a17e-429f-84da-df394440c78c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.763361 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "2c3c97eb-a17e-429f-84da-df394440c78c" (UID: "2c3c97eb-a17e-429f-84da-df394440c78c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.772880 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2c3c97eb-a17e-429f-84da-df394440c78c" (UID: "2c3c97eb-a17e-429f-84da-df394440c78c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.828247 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.828290 4736 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.828309 4736 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.828324 4736 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.828338 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vjpr\" (UniqueName: \"kubernetes.io/projected/2c3c97eb-a17e-429f-84da-df394440c78c-kube-api-access-8vjpr\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.828350 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c3c97eb-a17e-429f-84da-df394440c78c-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.933925 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" event={"ID":"2c3c97eb-a17e-429f-84da-df394440c78c","Type":"ContainerDied","Data":"3de8abca5e03f26d086841d3c99976e2968379036de822cdb15402fa9f6119e2"} Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.933969 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de8abca5e03f26d086841d3c99976e2968379036de822cdb15402fa9f6119e2" Feb 14 11:18:20 crc kubenswrapper[4736]: I0214 11:18:20.934383 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.061072 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw"] Feb 14 11:18:21 crc kubenswrapper[4736]: E0214 11:18:21.061445 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3c97eb-a17e-429f-84da-df394440c78c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.061461 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3c97eb-a17e-429f-84da-df394440c78c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.061638 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3c97eb-a17e-429f-84da-df394440c78c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.062270 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.067310 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.067450 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.067669 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.067673 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.068157 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.072549 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw"] Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.234361 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spx9f\" (UniqueName: \"kubernetes.io/projected/9e4b30a3-64e1-4f40-b895-41ac069e85f9-kube-api-access-spx9f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.234427 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: 
\"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.234488 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.234562 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.234582 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.336127 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.336206 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.336291 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.336311 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.336348 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spx9f\" (UniqueName: \"kubernetes.io/projected/9e4b30a3-64e1-4f40-b895-41ac069e85f9-kube-api-access-spx9f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.340250 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.341514 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.343165 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.349300 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.356970 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spx9f\" (UniqueName: \"kubernetes.io/projected/9e4b30a3-64e1-4f40-b895-41ac069e85f9-kube-api-access-spx9f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.377213 4736 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:18:21 crc kubenswrapper[4736]: I0214 11:18:21.960797 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw"] Feb 14 11:18:22 crc kubenswrapper[4736]: I0214 11:18:22.965847 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" event={"ID":"9e4b30a3-64e1-4f40-b895-41ac069e85f9","Type":"ContainerStarted","Data":"c4a6fdc10cb6963bfe2ada9d51380c68268408517d47bfb16b25e4104a26dca1"} Feb 14 11:18:22 crc kubenswrapper[4736]: I0214 11:18:22.966245 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" event={"ID":"9e4b30a3-64e1-4f40-b895-41ac069e85f9","Type":"ContainerStarted","Data":"ffefeb84a420a711d0cd34b8cfc86f37be4c20153eebf14872655c3dbe5444f8"} Feb 14 11:18:22 crc kubenswrapper[4736]: I0214 11:18:22.990346 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" podStartSLOduration=1.602045977 podStartE2EDuration="1.990325234s" podCreationTimestamp="2026-02-14 11:18:21 +0000 UTC" firstStartedPulling="2026-02-14 11:18:21.972956432 +0000 UTC m=+2212.341583800" lastFinishedPulling="2026-02-14 11:18:22.361235649 +0000 UTC m=+2212.729863057" observedRunningTime="2026-02-14 11:18:22.989041747 +0000 UTC m=+2213.357669145" watchObservedRunningTime="2026-02-14 11:18:22.990325234 +0000 UTC m=+2213.358952642" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.253552 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6nqzq"] Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.256503 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.264215 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nqzq"] Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.336587 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-utilities\") pod \"community-operators-6nqzq\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.336814 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xksk9\" (UniqueName: \"kubernetes.io/projected/e6418c44-3853-45ea-a18b-cb3071dfac5a-kube-api-access-xksk9\") pod \"community-operators-6nqzq\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.336871 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-catalog-content\") pod \"community-operators-6nqzq\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.438917 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-catalog-content\") pod \"community-operators-6nqzq\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.439022 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-utilities\") pod \"community-operators-6nqzq\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.439159 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xksk9\" (UniqueName: \"kubernetes.io/projected/e6418c44-3853-45ea-a18b-cb3071dfac5a-kube-api-access-xksk9\") pod \"community-operators-6nqzq\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.439387 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-catalog-content\") pod \"community-operators-6nqzq\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.439620 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-utilities\") pod \"community-operators-6nqzq\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.458358 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xksk9\" (UniqueName: \"kubernetes.io/projected/e6418c44-3853-45ea-a18b-cb3071dfac5a-kube-api-access-xksk9\") pod \"community-operators-6nqzq\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.586365 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.860431 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mrjwl"] Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.872727 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.897726 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrjwl"] Feb 14 11:18:36 crc kubenswrapper[4736]: I0214 11:18:36.950605 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nqzq"] Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.050812 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-catalog-content\") pod \"redhat-marketplace-mrjwl\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.050891 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kh7t\" (UniqueName: \"kubernetes.io/projected/15d7d4ef-516d-431b-be73-f5a087fa691b-kube-api-access-7kh7t\") pod \"redhat-marketplace-mrjwl\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.050944 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-utilities\") pod \"redhat-marketplace-mrjwl\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " 
pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.153733 4736 generic.go:334] "Generic (PLEG): container finished" podID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerID="bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41" exitCode=0 Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.153996 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqzq" event={"ID":"e6418c44-3853-45ea-a18b-cb3071dfac5a","Type":"ContainerDied","Data":"bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41"} Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.154019 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqzq" event={"ID":"e6418c44-3853-45ea-a18b-cb3071dfac5a","Type":"ContainerStarted","Data":"0e77eace9a86ce59445eb10177ba54a11e4317726b904a46b38f5bf162164245"} Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.156308 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-catalog-content\") pod \"redhat-marketplace-mrjwl\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.156396 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kh7t\" (UniqueName: \"kubernetes.io/projected/15d7d4ef-516d-431b-be73-f5a087fa691b-kube-api-access-7kh7t\") pod \"redhat-marketplace-mrjwl\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.156446 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-utilities\") pod \"redhat-marketplace-mrjwl\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.156983 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-catalog-content\") pod \"redhat-marketplace-mrjwl\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.157011 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-utilities\") pod \"redhat-marketplace-mrjwl\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.183468 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kh7t\" (UniqueName: \"kubernetes.io/projected/15d7d4ef-516d-431b-be73-f5a087fa691b-kube-api-access-7kh7t\") pod \"redhat-marketplace-mrjwl\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.212169 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:37 crc kubenswrapper[4736]: I0214 11:18:37.666631 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrjwl"] Feb 14 11:18:38 crc kubenswrapper[4736]: I0214 11:18:38.172735 4736 generic.go:334] "Generic (PLEG): container finished" podID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerID="00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394" exitCode=0 Feb 14 11:18:38 crc kubenswrapper[4736]: I0214 11:18:38.173207 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrjwl" event={"ID":"15d7d4ef-516d-431b-be73-f5a087fa691b","Type":"ContainerDied","Data":"00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394"} Feb 14 11:18:38 crc kubenswrapper[4736]: I0214 11:18:38.173261 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrjwl" event={"ID":"15d7d4ef-516d-431b-be73-f5a087fa691b","Type":"ContainerStarted","Data":"697f44656e9f29d58880c54a171669d1f244e63dd2abe450d1f8c52228658808"} Feb 14 11:18:39 crc kubenswrapper[4736]: I0214 11:18:39.181305 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrjwl" event={"ID":"15d7d4ef-516d-431b-be73-f5a087fa691b","Type":"ContainerStarted","Data":"4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f"} Feb 14 11:18:39 crc kubenswrapper[4736]: I0214 11:18:39.183040 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqzq" event={"ID":"e6418c44-3853-45ea-a18b-cb3071dfac5a","Type":"ContainerStarted","Data":"de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408"} Feb 14 11:18:40 crc kubenswrapper[4736]: I0214 11:18:40.210967 4736 generic.go:334] "Generic (PLEG): container finished" podID="e6418c44-3853-45ea-a18b-cb3071dfac5a" 
containerID="de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408" exitCode=0 Feb 14 11:18:40 crc kubenswrapper[4736]: I0214 11:18:40.211023 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqzq" event={"ID":"e6418c44-3853-45ea-a18b-cb3071dfac5a","Type":"ContainerDied","Data":"de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408"} Feb 14 11:18:41 crc kubenswrapper[4736]: I0214 11:18:41.220435 4736 generic.go:334] "Generic (PLEG): container finished" podID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerID="4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f" exitCode=0 Feb 14 11:18:41 crc kubenswrapper[4736]: I0214 11:18:41.220538 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrjwl" event={"ID":"15d7d4ef-516d-431b-be73-f5a087fa691b","Type":"ContainerDied","Data":"4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f"} Feb 14 11:18:43 crc kubenswrapper[4736]: I0214 11:18:43.240628 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqzq" event={"ID":"e6418c44-3853-45ea-a18b-cb3071dfac5a","Type":"ContainerStarted","Data":"d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1"} Feb 14 11:18:43 crc kubenswrapper[4736]: I0214 11:18:43.243525 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrjwl" event={"ID":"15d7d4ef-516d-431b-be73-f5a087fa691b","Type":"ContainerStarted","Data":"2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a"} Feb 14 11:18:43 crc kubenswrapper[4736]: I0214 11:18:43.293835 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6nqzq" podStartSLOduration=1.877868051 podStartE2EDuration="7.293814423s" podCreationTimestamp="2026-02-14 11:18:36 +0000 UTC" firstStartedPulling="2026-02-14 11:18:37.15524973 +0000 
UTC m=+2227.523877098" lastFinishedPulling="2026-02-14 11:18:42.571196102 +0000 UTC m=+2232.939823470" observedRunningTime="2026-02-14 11:18:43.26940055 +0000 UTC m=+2233.638027928" watchObservedRunningTime="2026-02-14 11:18:43.293814423 +0000 UTC m=+2233.662441811" Feb 14 11:18:43 crc kubenswrapper[4736]: I0214 11:18:43.300424 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mrjwl" podStartSLOduration=2.568279088 podStartE2EDuration="7.300411481s" podCreationTimestamp="2026-02-14 11:18:36 +0000 UTC" firstStartedPulling="2026-02-14 11:18:38.17678929 +0000 UTC m=+2228.545416688" lastFinishedPulling="2026-02-14 11:18:42.908921713 +0000 UTC m=+2233.277549081" observedRunningTime="2026-02-14 11:18:43.293582727 +0000 UTC m=+2233.662210105" watchObservedRunningTime="2026-02-14 11:18:43.300411481 +0000 UTC m=+2233.669038859" Feb 14 11:18:46 crc kubenswrapper[4736]: I0214 11:18:46.587157 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:46 crc kubenswrapper[4736]: I0214 11:18:46.587707 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:46 crc kubenswrapper[4736]: I0214 11:18:46.638152 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:47 crc kubenswrapper[4736]: I0214 11:18:47.213489 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:47 crc kubenswrapper[4736]: I0214 11:18:47.213775 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:47 crc kubenswrapper[4736]: I0214 11:18:47.261794 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:47 crc kubenswrapper[4736]: I0214 11:18:47.338498 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:48 crc kubenswrapper[4736]: I0214 11:18:48.238514 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6nqzq"] Feb 14 11:18:48 crc kubenswrapper[4736]: I0214 11:18:48.326911 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.297017 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6nqzq" podUID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerName="registry-server" containerID="cri-o://d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1" gracePeriod=2 Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.641961 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrjwl"] Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.749222 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.878314 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-utilities\") pod \"e6418c44-3853-45ea-a18b-cb3071dfac5a\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.878584 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xksk9\" (UniqueName: \"kubernetes.io/projected/e6418c44-3853-45ea-a18b-cb3071dfac5a-kube-api-access-xksk9\") pod \"e6418c44-3853-45ea-a18b-cb3071dfac5a\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.878694 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-catalog-content\") pod \"e6418c44-3853-45ea-a18b-cb3071dfac5a\" (UID: \"e6418c44-3853-45ea-a18b-cb3071dfac5a\") " Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.879340 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-utilities" (OuterVolumeSpecName: "utilities") pod "e6418c44-3853-45ea-a18b-cb3071dfac5a" (UID: "e6418c44-3853-45ea-a18b-cb3071dfac5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.885962 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6418c44-3853-45ea-a18b-cb3071dfac5a-kube-api-access-xksk9" (OuterVolumeSpecName: "kube-api-access-xksk9") pod "e6418c44-3853-45ea-a18b-cb3071dfac5a" (UID: "e6418c44-3853-45ea-a18b-cb3071dfac5a"). InnerVolumeSpecName "kube-api-access-xksk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.944437 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6418c44-3853-45ea-a18b-cb3071dfac5a" (UID: "e6418c44-3853-45ea-a18b-cb3071dfac5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.980881 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xksk9\" (UniqueName: \"kubernetes.io/projected/e6418c44-3853-45ea-a18b-cb3071dfac5a-kube-api-access-xksk9\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.980926 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:49 crc kubenswrapper[4736]: I0214 11:18:49.980935 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6418c44-3853-45ea-a18b-cb3071dfac5a-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.313404 4736 generic.go:334] "Generic (PLEG): container finished" podID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerID="d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1" exitCode=0 Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.313470 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqzq" event={"ID":"e6418c44-3853-45ea-a18b-cb3071dfac5a","Type":"ContainerDied","Data":"d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1"} Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.313504 4736 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nqzq" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.313538 4736 scope.go:117] "RemoveContainer" containerID="d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.313522 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqzq" event={"ID":"e6418c44-3853-45ea-a18b-cb3071dfac5a","Type":"ContainerDied","Data":"0e77eace9a86ce59445eb10177ba54a11e4317726b904a46b38f5bf162164245"} Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.313649 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mrjwl" podUID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerName="registry-server" containerID="cri-o://2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a" gracePeriod=2 Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.351003 4736 scope.go:117] "RemoveContainer" containerID="de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.369135 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6nqzq"] Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.376493 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6nqzq"] Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.409920 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6418c44-3853-45ea-a18b-cb3071dfac5a" path="/var/lib/kubelet/pods/e6418c44-3853-45ea-a18b-cb3071dfac5a/volumes" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.439255 4736 scope.go:117] "RemoveContainer" containerID="bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.534502 4736 scope.go:117] 
"RemoveContainer" containerID="d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1" Feb 14 11:18:50 crc kubenswrapper[4736]: E0214 11:18:50.535040 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1\": container with ID starting with d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1 not found: ID does not exist" containerID="d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.535087 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1"} err="failed to get container status \"d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1\": rpc error: code = NotFound desc = could not find container \"d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1\": container with ID starting with d8f78e45cc2fe2922a27014c2b9c87a289c74104a7ba45b0e7486c9e610cbdf1 not found: ID does not exist" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.535114 4736 scope.go:117] "RemoveContainer" containerID="de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408" Feb 14 11:18:50 crc kubenswrapper[4736]: E0214 11:18:50.535501 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408\": container with ID starting with de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408 not found: ID does not exist" containerID="de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.535546 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408"} err="failed to get container status \"de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408\": rpc error: code = NotFound desc = could not find container \"de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408\": container with ID starting with de1018ca8c5d0f9ebb4cfd3342769a426e5f33526467b6eb75680ad9cf03e408 not found: ID does not exist" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.535574 4736 scope.go:117] "RemoveContainer" containerID="bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41" Feb 14 11:18:50 crc kubenswrapper[4736]: E0214 11:18:50.536883 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41\": container with ID starting with bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41 not found: ID does not exist" containerID="bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.536917 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41"} err="failed to get container status \"bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41\": rpc error: code = NotFound desc = could not find container \"bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41\": container with ID starting with bec4a236c0b91a94934a8f31a4c8a1fc5fead563fb76cb13e3e6cf45b0b70a41 not found: ID does not exist" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.747793 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.899187 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-catalog-content\") pod \"15d7d4ef-516d-431b-be73-f5a087fa691b\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.899466 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kh7t\" (UniqueName: \"kubernetes.io/projected/15d7d4ef-516d-431b-be73-f5a087fa691b-kube-api-access-7kh7t\") pod \"15d7d4ef-516d-431b-be73-f5a087fa691b\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.899621 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-utilities\") pod \"15d7d4ef-516d-431b-be73-f5a087fa691b\" (UID: \"15d7d4ef-516d-431b-be73-f5a087fa691b\") " Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.900575 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-utilities" (OuterVolumeSpecName: "utilities") pod "15d7d4ef-516d-431b-be73-f5a087fa691b" (UID: "15d7d4ef-516d-431b-be73-f5a087fa691b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.901307 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.905427 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d7d4ef-516d-431b-be73-f5a087fa691b-kube-api-access-7kh7t" (OuterVolumeSpecName: "kube-api-access-7kh7t") pod "15d7d4ef-516d-431b-be73-f5a087fa691b" (UID: "15d7d4ef-516d-431b-be73-f5a087fa691b"). InnerVolumeSpecName "kube-api-access-7kh7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:18:50 crc kubenswrapper[4736]: I0214 11:18:50.920155 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15d7d4ef-516d-431b-be73-f5a087fa691b" (UID: "15d7d4ef-516d-431b-be73-f5a087fa691b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.002901 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d7d4ef-516d-431b-be73-f5a087fa691b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.002939 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kh7t\" (UniqueName: \"kubernetes.io/projected/15d7d4ef-516d-431b-be73-f5a087fa691b-kube-api-access-7kh7t\") on node \"crc\" DevicePath \"\"" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.327129 4736 generic.go:334] "Generic (PLEG): container finished" podID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerID="2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a" exitCode=0 Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.327194 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrjwl" event={"ID":"15d7d4ef-516d-431b-be73-f5a087fa691b","Type":"ContainerDied","Data":"2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a"} Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.327414 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrjwl" event={"ID":"15d7d4ef-516d-431b-be73-f5a087fa691b","Type":"ContainerDied","Data":"697f44656e9f29d58880c54a171669d1f244e63dd2abe450d1f8c52228658808"} Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.327431 4736 scope.go:117] "RemoveContainer" containerID="2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.327268 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrjwl" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.346915 4736 scope.go:117] "RemoveContainer" containerID="4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.375515 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrjwl"] Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.376902 4736 scope.go:117] "RemoveContainer" containerID="00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.413292 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrjwl"] Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.417699 4736 scope.go:117] "RemoveContainer" containerID="2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a" Feb 14 11:18:51 crc kubenswrapper[4736]: E0214 11:18:51.421448 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a\": container with ID starting with 2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a not found: ID does not exist" containerID="2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.421484 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a"} err="failed to get container status \"2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a\": rpc error: code = NotFound desc = could not find container \"2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a\": container with ID starting with 2f8b22e27db06640cf8d27b1799474d63f7918402842ab6da7de333820f4e82a not found: 
ID does not exist" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.421512 4736 scope.go:117] "RemoveContainer" containerID="4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f" Feb 14 11:18:51 crc kubenswrapper[4736]: E0214 11:18:51.421777 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f\": container with ID starting with 4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f not found: ID does not exist" containerID="4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.421800 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f"} err="failed to get container status \"4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f\": rpc error: code = NotFound desc = could not find container \"4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f\": container with ID starting with 4588b0dd46b35f8c7ad3370ebf0f01df3496ee6ad8ee5b1734d466c86b64680f not found: ID does not exist" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.421815 4736 scope.go:117] "RemoveContainer" containerID="00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394" Feb 14 11:18:51 crc kubenswrapper[4736]: E0214 11:18:51.422126 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394\": container with ID starting with 00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394 not found: ID does not exist" containerID="00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394" Feb 14 11:18:51 crc kubenswrapper[4736]: I0214 11:18:51.422150 4736 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394"} err="failed to get container status \"00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394\": rpc error: code = NotFound desc = could not find container \"00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394\": container with ID starting with 00a53e04b5215982dacca204927feb1c3a43424969c4b4de80eaffb3a59ad394 not found: ID does not exist" Feb 14 11:18:52 crc kubenswrapper[4736]: I0214 11:18:52.411315 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d7d4ef-516d-431b-be73-f5a087fa691b" path="/var/lib/kubelet/pods/15d7d4ef-516d-431b-be73-f5a087fa691b/volumes" Feb 14 11:20:17 crc kubenswrapper[4736]: I0214 11:20:17.695908 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:20:17 crc kubenswrapper[4736]: I0214 11:20:17.696493 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:20:47 crc kubenswrapper[4736]: I0214 11:20:47.695931 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:20:47 crc kubenswrapper[4736]: I0214 11:20:47.696696 4736 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:21:17 crc kubenswrapper[4736]: I0214 11:21:17.695272 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:21:17 crc kubenswrapper[4736]: I0214 11:21:17.695717 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:21:17 crc kubenswrapper[4736]: I0214 11:21:17.695778 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:21:17 crc kubenswrapper[4736]: I0214 11:21:17.696469 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:21:17 crc kubenswrapper[4736]: I0214 11:21:17.696513 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" 
containerID="cri-o://c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" gracePeriod=600 Feb 14 11:21:17 crc kubenswrapper[4736]: E0214 11:21:17.833860 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:21:18 crc kubenswrapper[4736]: I0214 11:21:18.691495 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" exitCode=0 Feb 14 11:21:18 crc kubenswrapper[4736]: I0214 11:21:18.691553 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a"} Feb 14 11:21:18 crc kubenswrapper[4736]: I0214 11:21:18.691592 4736 scope.go:117] "RemoveContainer" containerID="63627cf082421e3ad56b0c4fcb8aa173da170f9bb409ebe8bcd959c560af7e4a" Feb 14 11:21:18 crc kubenswrapper[4736]: I0214 11:21:18.692356 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:21:18 crc kubenswrapper[4736]: E0214 11:21:18.692586 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" 
podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:21:31 crc kubenswrapper[4736]: I0214 11:21:31.397844 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:21:31 crc kubenswrapper[4736]: E0214 11:21:31.399260 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:21:46 crc kubenswrapper[4736]: I0214 11:21:46.397771 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:21:46 crc kubenswrapper[4736]: E0214 11:21:46.400009 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:21:59 crc kubenswrapper[4736]: I0214 11:21:59.397675 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:21:59 crc kubenswrapper[4736]: E0214 11:21:59.398485 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:22:12 crc kubenswrapper[4736]: I0214 11:22:12.397564 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:22:12 crc kubenswrapper[4736]: E0214 11:22:12.398357 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:22:17 crc kubenswrapper[4736]: I0214 11:22:17.218816 4736 generic.go:334] "Generic (PLEG): container finished" podID="9e4b30a3-64e1-4f40-b895-41ac069e85f9" containerID="c4a6fdc10cb6963bfe2ada9d51380c68268408517d47bfb16b25e4104a26dca1" exitCode=0 Feb 14 11:22:17 crc kubenswrapper[4736]: I0214 11:22:17.218851 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" event={"ID":"9e4b30a3-64e1-4f40-b895-41ac069e85f9","Type":"ContainerDied","Data":"c4a6fdc10cb6963bfe2ada9d51380c68268408517d47bfb16b25e4104a26dca1"} Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.625087 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.723807 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-combined-ca-bundle\") pod \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.723879 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spx9f\" (UniqueName: \"kubernetes.io/projected/9e4b30a3-64e1-4f40-b895-41ac069e85f9-kube-api-access-spx9f\") pod \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.723921 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-inventory\") pod \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.723963 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-secret-0\") pod \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.723986 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-ssh-key-openstack-edpm-ipam\") pod \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\" (UID: \"9e4b30a3-64e1-4f40-b895-41ac069e85f9\") " Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.729137 4736 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9e4b30a3-64e1-4f40-b895-41ac069e85f9" (UID: "9e4b30a3-64e1-4f40-b895-41ac069e85f9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.744312 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4b30a3-64e1-4f40-b895-41ac069e85f9-kube-api-access-spx9f" (OuterVolumeSpecName: "kube-api-access-spx9f") pod "9e4b30a3-64e1-4f40-b895-41ac069e85f9" (UID: "9e4b30a3-64e1-4f40-b895-41ac069e85f9"). InnerVolumeSpecName "kube-api-access-spx9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.749074 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-inventory" (OuterVolumeSpecName: "inventory") pod "9e4b30a3-64e1-4f40-b895-41ac069e85f9" (UID: "9e4b30a3-64e1-4f40-b895-41ac069e85f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.758298 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9e4b30a3-64e1-4f40-b895-41ac069e85f9" (UID: "9e4b30a3-64e1-4f40-b895-41ac069e85f9"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.761434 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9e4b30a3-64e1-4f40-b895-41ac069e85f9" (UID: "9e4b30a3-64e1-4f40-b895-41ac069e85f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.825799 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spx9f\" (UniqueName: \"kubernetes.io/projected/9e4b30a3-64e1-4f40-b895-41ac069e85f9-kube-api-access-spx9f\") on node \"crc\" DevicePath \"\"" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.825826 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.825839 4736 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.825853 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:22:18 crc kubenswrapper[4736]: I0214 11:22:18.825865 4736 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4b30a3-64e1-4f40-b895-41ac069e85f9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.237302 4736 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" event={"ID":"9e4b30a3-64e1-4f40-b895-41ac069e85f9","Type":"ContainerDied","Data":"ffefeb84a420a711d0cd34b8cfc86f37be4c20153eebf14872655c3dbe5444f8"} Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.237620 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffefeb84a420a711d0cd34b8cfc86f37be4c20153eebf14872655c3dbe5444f8" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.237701 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.363414 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2"] Feb 14 11:22:19 crc kubenswrapper[4736]: E0214 11:22:19.363831 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerName="extract-content" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.363853 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerName="extract-content" Feb 14 11:22:19 crc kubenswrapper[4736]: E0214 11:22:19.363869 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerName="extract-utilities" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.363878 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerName="extract-utilities" Feb 14 11:22:19 crc kubenswrapper[4736]: E0214 11:22:19.363890 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerName="extract-content" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.363898 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerName="extract-content" Feb 14 11:22:19 crc kubenswrapper[4736]: E0214 11:22:19.363917 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerName="registry-server" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.363923 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerName="registry-server" Feb 14 11:22:19 crc kubenswrapper[4736]: E0214 11:22:19.363942 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerName="registry-server" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.363948 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerName="registry-server" Feb 14 11:22:19 crc kubenswrapper[4736]: E0214 11:22:19.363957 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerName="extract-utilities" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.363963 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerName="extract-utilities" Feb 14 11:22:19 crc kubenswrapper[4736]: E0214 11:22:19.363973 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4b30a3-64e1-4f40-b895-41ac069e85f9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.363979 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4b30a3-64e1-4f40-b895-41ac069e85f9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.364140 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d7d4ef-516d-431b-be73-f5a087fa691b" containerName="registry-server" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.364156 4736 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e6418c44-3853-45ea-a18b-cb3071dfac5a" containerName="registry-server" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.364171 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4b30a3-64e1-4f40-b895-41ac069e85f9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.364776 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.370471 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.370532 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.370671 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.370807 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.370808 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.370951 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.371131 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.373226 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2"] Feb 14 11:22:19 crc 
kubenswrapper[4736]: I0214 11:22:19.540727 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.541396 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.542604 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.542666 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.542703 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.542773 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmpl\" (UniqueName: \"kubernetes.io/projected/e40d6c31-4f67-46cc-b2a2-991133a68003-kube-api-access-qrmpl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.542797 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.542847 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.542866 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: 
\"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.643467 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.643534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.643591 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmpl\" (UniqueName: \"kubernetes.io/projected/e40d6c31-4f67-46cc-b2a2-991133a68003-kube-api-access-qrmpl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.643617 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.643679 4736 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.643708 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.643818 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.643856 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.643937 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") 
" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.645607 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.659446 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.659446 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.659705 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.659909 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.660245 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.660379 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.663303 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc kubenswrapper[4736]: I0214 11:22:19.669255 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmpl\" (UniqueName: \"kubernetes.io/projected/e40d6c31-4f67-46cc-b2a2-991133a68003-kube-api-access-qrmpl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sj8v2\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:19 crc 
kubenswrapper[4736]: I0214 11:22:19.691286 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:22:20 crc kubenswrapper[4736]: I0214 11:22:20.229810 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 11:22:20 crc kubenswrapper[4736]: I0214 11:22:20.233760 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2"] Feb 14 11:22:20 crc kubenswrapper[4736]: I0214 11:22:20.249243 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" event={"ID":"e40d6c31-4f67-46cc-b2a2-991133a68003","Type":"ContainerStarted","Data":"c336d72473978ea284f1d3abf89dd0a54ba484b12b48c8590a8c2ae17c0d78d5"} Feb 14 11:22:21 crc kubenswrapper[4736]: I0214 11:22:21.258553 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" event={"ID":"e40d6c31-4f67-46cc-b2a2-991133a68003","Type":"ContainerStarted","Data":"d7068c5cfe1baca9c86eeacc1a5d5b9cf7f681188360a88f4be7b47fabd10a24"} Feb 14 11:22:21 crc kubenswrapper[4736]: I0214 11:22:21.279495 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" podStartSLOduration=1.829675344 podStartE2EDuration="2.279473829s" podCreationTimestamp="2026-02-14 11:22:19 +0000 UTC" firstStartedPulling="2026-02-14 11:22:20.229259181 +0000 UTC m=+2450.597886559" lastFinishedPulling="2026-02-14 11:22:20.679057676 +0000 UTC m=+2451.047685044" observedRunningTime="2026-02-14 11:22:21.274887459 +0000 UTC m=+2451.643514837" watchObservedRunningTime="2026-02-14 11:22:21.279473829 +0000 UTC m=+2451.648101217" Feb 14 11:22:25 crc kubenswrapper[4736]: I0214 11:22:25.397590 4736 scope.go:117] "RemoveContainer" 
containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:22:25 crc kubenswrapper[4736]: E0214 11:22:25.398234 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:22:36 crc kubenswrapper[4736]: I0214 11:22:36.397311 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:22:36 crc kubenswrapper[4736]: E0214 11:22:36.398059 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:22:51 crc kubenswrapper[4736]: I0214 11:22:51.397732 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:22:51 crc kubenswrapper[4736]: E0214 11:22:51.398572 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:23:05 crc kubenswrapper[4736]: I0214 11:23:05.398195 4736 scope.go:117] 
"RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:23:05 crc kubenswrapper[4736]: E0214 11:23:05.399295 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:23:17 crc kubenswrapper[4736]: I0214 11:23:17.398645 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:23:17 crc kubenswrapper[4736]: E0214 11:23:17.399510 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:23:31 crc kubenswrapper[4736]: I0214 11:23:31.397893 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:23:31 crc kubenswrapper[4736]: E0214 11:23:31.399017 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:23:44 crc kubenswrapper[4736]: I0214 11:23:44.399702 
4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:23:44 crc kubenswrapper[4736]: E0214 11:23:44.403027 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:23:59 crc kubenswrapper[4736]: I0214 11:23:59.397557 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:23:59 crc kubenswrapper[4736]: E0214 11:23:59.398541 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:24:13 crc kubenswrapper[4736]: I0214 11:24:13.397152 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:24:13 crc kubenswrapper[4736]: E0214 11:24:13.398079 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:24:25 crc kubenswrapper[4736]: I0214 
11:24:25.397072 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:24:25 crc kubenswrapper[4736]: E0214 11:24:25.397884 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:24:38 crc kubenswrapper[4736]: I0214 11:24:38.399465 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:24:38 crc kubenswrapper[4736]: E0214 11:24:38.400234 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:24:53 crc kubenswrapper[4736]: I0214 11:24:53.413689 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:24:53 crc kubenswrapper[4736]: E0214 11:24:53.415017 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:25:07 crc 
kubenswrapper[4736]: I0214 11:25:07.397678 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:25:07 crc kubenswrapper[4736]: E0214 11:25:07.398664 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:25:11 crc kubenswrapper[4736]: I0214 11:25:11.110130 4736 generic.go:334] "Generic (PLEG): container finished" podID="e40d6c31-4f67-46cc-b2a2-991133a68003" containerID="d7068c5cfe1baca9c86eeacc1a5d5b9cf7f681188360a88f4be7b47fabd10a24" exitCode=0 Feb 14 11:25:11 crc kubenswrapper[4736]: I0214 11:25:11.110249 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" event={"ID":"e40d6c31-4f67-46cc-b2a2-991133a68003","Type":"ContainerDied","Data":"d7068c5cfe1baca9c86eeacc1a5d5b9cf7f681188360a88f4be7b47fabd10a24"} Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.608929 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.792448 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-inventory\") pod \"e40d6c31-4f67-46cc-b2a2-991133a68003\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.792522 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-0\") pod \"e40d6c31-4f67-46cc-b2a2-991133a68003\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.792690 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-combined-ca-bundle\") pod \"e40d6c31-4f67-46cc-b2a2-991133a68003\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.792807 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-1\") pod \"e40d6c31-4f67-46cc-b2a2-991133a68003\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.792893 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-1\") pod \"e40d6c31-4f67-46cc-b2a2-991133a68003\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.792934 4736 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-extra-config-0\") pod \"e40d6c31-4f67-46cc-b2a2-991133a68003\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.792969 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-ssh-key-openstack-edpm-ipam\") pod \"e40d6c31-4f67-46cc-b2a2-991133a68003\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.793074 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-0\") pod \"e40d6c31-4f67-46cc-b2a2-991133a68003\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.793118 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrmpl\" (UniqueName: \"kubernetes.io/projected/e40d6c31-4f67-46cc-b2a2-991133a68003-kube-api-access-qrmpl\") pod \"e40d6c31-4f67-46cc-b2a2-991133a68003\" (UID: \"e40d6c31-4f67-46cc-b2a2-991133a68003\") " Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.798288 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e40d6c31-4f67-46cc-b2a2-991133a68003" (UID: "e40d6c31-4f67-46cc-b2a2-991133a68003"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.801230 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40d6c31-4f67-46cc-b2a2-991133a68003-kube-api-access-qrmpl" (OuterVolumeSpecName: "kube-api-access-qrmpl") pod "e40d6c31-4f67-46cc-b2a2-991133a68003" (UID: "e40d6c31-4f67-46cc-b2a2-991133a68003"). InnerVolumeSpecName "kube-api-access-qrmpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.822006 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "e40d6c31-4f67-46cc-b2a2-991133a68003" (UID: "e40d6c31-4f67-46cc-b2a2-991133a68003"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.825557 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-inventory" (OuterVolumeSpecName: "inventory") pod "e40d6c31-4f67-46cc-b2a2-991133a68003" (UID: "e40d6c31-4f67-46cc-b2a2-991133a68003"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.830600 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e40d6c31-4f67-46cc-b2a2-991133a68003" (UID: "e40d6c31-4f67-46cc-b2a2-991133a68003"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.855712 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e40d6c31-4f67-46cc-b2a2-991133a68003" (UID: "e40d6c31-4f67-46cc-b2a2-991133a68003"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.860262 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e40d6c31-4f67-46cc-b2a2-991133a68003" (UID: "e40d6c31-4f67-46cc-b2a2-991133a68003"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.861675 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e40d6c31-4f67-46cc-b2a2-991133a68003" (UID: "e40d6c31-4f67-46cc-b2a2-991133a68003"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.870023 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e40d6c31-4f67-46cc-b2a2-991133a68003" (UID: "e40d6c31-4f67-46cc-b2a2-991133a68003"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.895770 4736 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.895800 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrmpl\" (UniqueName: \"kubernetes.io/projected/e40d6c31-4f67-46cc-b2a2-991133a68003-kube-api-access-qrmpl\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.895810 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.895820 4736 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.895831 4736 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.895838 4736 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.895846 4736 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-migration-ssh-key-1\") on node 
\"crc\" DevicePath \"\"" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.895854 4736 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e40d6c31-4f67-46cc-b2a2-991133a68003-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:12 crc kubenswrapper[4736]: I0214 11:25:12.895862 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e40d6c31-4f67-46cc-b2a2-991133a68003-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.128966 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" event={"ID":"e40d6c31-4f67-46cc-b2a2-991133a68003","Type":"ContainerDied","Data":"c336d72473978ea284f1d3abf89dd0a54ba484b12b48c8590a8c2ae17c0d78d5"} Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.129367 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c336d72473978ea284f1d3abf89dd0a54ba484b12b48c8590a8c2ae17c0d78d5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.129343 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sj8v2" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.335799 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5"] Feb 14 11:25:13 crc kubenswrapper[4736]: E0214 11:25:13.336727 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40d6c31-4f67-46cc-b2a2-991133a68003" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.336765 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40d6c31-4f67-46cc-b2a2-991133a68003" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.337005 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40d6c31-4f67-46cc-b2a2-991133a68003" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.337700 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.342056 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.342183 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.342318 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.342371 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.391720 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ds4ss" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.414237 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5"] Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.506001 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.506062 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.506102 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.506237 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.506261 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twdh9\" (UniqueName: \"kubernetes.io/projected/8c44413d-b97e-45f6-80d1-71f5e489c4ac-kube-api-access-twdh9\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.506500 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.507719 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.609798 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.609889 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.610059 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 
11:25:13.610128 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.610190 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.610243 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.610275 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twdh9\" (UniqueName: \"kubernetes.io/projected/8c44413d-b97e-45f6-80d1-71f5e489c4ac-kube-api-access-twdh9\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.615411 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.616217 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.618384 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.619215 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.625175 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.631105 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.632696 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twdh9\" (UniqueName: \"kubernetes.io/projected/8c44413d-b97e-45f6-80d1-71f5e489c4ac-kube-api-access-twdh9\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:13 crc kubenswrapper[4736]: I0214 11:25:13.707693 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:25:14 crc kubenswrapper[4736]: I0214 11:25:14.284138 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5"] Feb 14 11:25:15 crc kubenswrapper[4736]: I0214 11:25:15.147222 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" event={"ID":"8c44413d-b97e-45f6-80d1-71f5e489c4ac","Type":"ContainerStarted","Data":"85e6b5d4d781134ff354fee3575fd8a1f3fcca0699fabaeb1c11b3a604c6a6aa"} Feb 14 11:25:15 crc kubenswrapper[4736]: I0214 11:25:15.147577 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" event={"ID":"8c44413d-b97e-45f6-80d1-71f5e489c4ac","Type":"ContainerStarted","Data":"285038664b526995103bd6dfd075e6fe6e80cf5c72e78929a555d689d7af6801"} Feb 14 11:25:15 crc kubenswrapper[4736]: I0214 11:25:15.176700 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" podStartSLOduration=1.687020185 podStartE2EDuration="2.176681717s" podCreationTimestamp="2026-02-14 11:25:13 +0000 UTC" firstStartedPulling="2026-02-14 11:25:14.291394065 +0000 UTC m=+2624.660021433" lastFinishedPulling="2026-02-14 11:25:14.781055587 +0000 UTC m=+2625.149682965" observedRunningTime="2026-02-14 11:25:15.167266809 +0000 UTC m=+2625.535894197" watchObservedRunningTime="2026-02-14 11:25:15.176681717 +0000 UTC m=+2625.545309085" Feb 14 11:25:19 crc kubenswrapper[4736]: I0214 11:25:19.397816 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:25:19 crc kubenswrapper[4736]: E0214 11:25:19.398792 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:25:30 crc kubenswrapper[4736]: I0214 11:25:30.403802 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:25:30 crc kubenswrapper[4736]: E0214 11:25:30.404442 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:25:44 crc kubenswrapper[4736]: I0214 11:25:44.405882 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:25:44 crc kubenswrapper[4736]: E0214 11:25:44.406588 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.644507 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sksq9"] Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.647505 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.663640 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sksq9"] Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.694662 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-catalog-content\") pod \"certified-operators-sksq9\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.694837 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-utilities\") pod \"certified-operators-sksq9\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.694948 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrs6g\" (UniqueName: \"kubernetes.io/projected/e8df5df5-dc27-4179-8747-2e9710903941-kube-api-access-zrs6g\") pod \"certified-operators-sksq9\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.796506 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-catalog-content\") pod \"certified-operators-sksq9\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.796573 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-utilities\") pod \"certified-operators-sksq9\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.796625 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrs6g\" (UniqueName: \"kubernetes.io/projected/e8df5df5-dc27-4179-8747-2e9710903941-kube-api-access-zrs6g\") pod \"certified-operators-sksq9\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.797088 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-utilities\") pod \"certified-operators-sksq9\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.797366 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-catalog-content\") pod \"certified-operators-sksq9\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.832098 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrs6g\" (UniqueName: \"kubernetes.io/projected/e8df5df5-dc27-4179-8747-2e9710903941-kube-api-access-zrs6g\") pod \"certified-operators-sksq9\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:45 crc kubenswrapper[4736]: I0214 11:25:45.970865 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:46 crc kubenswrapper[4736]: I0214 11:25:46.555816 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sksq9"] Feb 14 11:25:47 crc kubenswrapper[4736]: I0214 11:25:47.476256 4736 generic.go:334] "Generic (PLEG): container finished" podID="e8df5df5-dc27-4179-8747-2e9710903941" containerID="4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7" exitCode=0 Feb 14 11:25:47 crc kubenswrapper[4736]: I0214 11:25:47.476526 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sksq9" event={"ID":"e8df5df5-dc27-4179-8747-2e9710903941","Type":"ContainerDied","Data":"4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7"} Feb 14 11:25:47 crc kubenswrapper[4736]: I0214 11:25:47.476551 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sksq9" event={"ID":"e8df5df5-dc27-4179-8747-2e9710903941","Type":"ContainerStarted","Data":"40ffc568568f35622641b53923b39c40f8f01866f3280191e0c4ea28fc9cd742"} Feb 14 11:25:48 crc kubenswrapper[4736]: I0214 11:25:48.486762 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sksq9" event={"ID":"e8df5df5-dc27-4179-8747-2e9710903941","Type":"ContainerStarted","Data":"9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290"} Feb 14 11:25:50 crc kubenswrapper[4736]: I0214 11:25:50.508626 4736 generic.go:334] "Generic (PLEG): container finished" podID="e8df5df5-dc27-4179-8747-2e9710903941" containerID="9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290" exitCode=0 Feb 14 11:25:50 crc kubenswrapper[4736]: I0214 11:25:50.508729 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sksq9" 
event={"ID":"e8df5df5-dc27-4179-8747-2e9710903941","Type":"ContainerDied","Data":"9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290"} Feb 14 11:25:51 crc kubenswrapper[4736]: I0214 11:25:51.520367 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sksq9" event={"ID":"e8df5df5-dc27-4179-8747-2e9710903941","Type":"ContainerStarted","Data":"3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5"} Feb 14 11:25:51 crc kubenswrapper[4736]: I0214 11:25:51.542622 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sksq9" podStartSLOduration=3.107273514 podStartE2EDuration="6.542601081s" podCreationTimestamp="2026-02-14 11:25:45 +0000 UTC" firstStartedPulling="2026-02-14 11:25:47.480464676 +0000 UTC m=+2657.849092044" lastFinishedPulling="2026-02-14 11:25:50.915792203 +0000 UTC m=+2661.284419611" observedRunningTime="2026-02-14 11:25:51.542158908 +0000 UTC m=+2661.910786296" watchObservedRunningTime="2026-02-14 11:25:51.542601081 +0000 UTC m=+2661.911228449" Feb 14 11:25:55 crc kubenswrapper[4736]: I0214 11:25:55.971679 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:55 crc kubenswrapper[4736]: I0214 11:25:55.972105 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:56 crc kubenswrapper[4736]: I0214 11:25:56.038179 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:56 crc kubenswrapper[4736]: I0214 11:25:56.659225 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:56 crc kubenswrapper[4736]: I0214 11:25:56.713868 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-sksq9"] Feb 14 11:25:57 crc kubenswrapper[4736]: I0214 11:25:57.396947 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:25:57 crc kubenswrapper[4736]: E0214 11:25:57.397165 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:25:58 crc kubenswrapper[4736]: I0214 11:25:58.597727 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sksq9" podUID="e8df5df5-dc27-4179-8747-2e9710903941" containerName="registry-server" containerID="cri-o://3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5" gracePeriod=2 Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.059241 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.165715 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-catalog-content\") pod \"e8df5df5-dc27-4179-8747-2e9710903941\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.165772 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrs6g\" (UniqueName: \"kubernetes.io/projected/e8df5df5-dc27-4179-8747-2e9710903941-kube-api-access-zrs6g\") pod \"e8df5df5-dc27-4179-8747-2e9710903941\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.166061 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-utilities\") pod \"e8df5df5-dc27-4179-8747-2e9710903941\" (UID: \"e8df5df5-dc27-4179-8747-2e9710903941\") " Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.167046 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-utilities" (OuterVolumeSpecName: "utilities") pod "e8df5df5-dc27-4179-8747-2e9710903941" (UID: "e8df5df5-dc27-4179-8747-2e9710903941"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.170955 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8df5df5-dc27-4179-8747-2e9710903941-kube-api-access-zrs6g" (OuterVolumeSpecName: "kube-api-access-zrs6g") pod "e8df5df5-dc27-4179-8747-2e9710903941" (UID: "e8df5df5-dc27-4179-8747-2e9710903941"). InnerVolumeSpecName "kube-api-access-zrs6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.215567 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8df5df5-dc27-4179-8747-2e9710903941" (UID: "e8df5df5-dc27-4179-8747-2e9710903941"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.268349 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.268384 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8df5df5-dc27-4179-8747-2e9710903941-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.268398 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrs6g\" (UniqueName: \"kubernetes.io/projected/e8df5df5-dc27-4179-8747-2e9710903941-kube-api-access-zrs6g\") on node \"crc\" DevicePath \"\"" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.612317 4736 generic.go:334] "Generic (PLEG): container finished" podID="e8df5df5-dc27-4179-8747-2e9710903941" containerID="3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5" exitCode=0 Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.612360 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sksq9" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.612368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sksq9" event={"ID":"e8df5df5-dc27-4179-8747-2e9710903941","Type":"ContainerDied","Data":"3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5"} Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.612530 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sksq9" event={"ID":"e8df5df5-dc27-4179-8747-2e9710903941","Type":"ContainerDied","Data":"40ffc568568f35622641b53923b39c40f8f01866f3280191e0c4ea28fc9cd742"} Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.612555 4736 scope.go:117] "RemoveContainer" containerID="3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.644769 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sksq9"] Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.646687 4736 scope.go:117] "RemoveContainer" containerID="9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.655873 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sksq9"] Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.668216 4736 scope.go:117] "RemoveContainer" containerID="4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.707933 4736 scope.go:117] "RemoveContainer" containerID="3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5" Feb 14 11:25:59 crc kubenswrapper[4736]: E0214 11:25:59.708329 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5\": container with ID starting with 3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5 not found: ID does not exist" containerID="3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.708360 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5"} err="failed to get container status \"3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5\": rpc error: code = NotFound desc = could not find container \"3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5\": container with ID starting with 3e46ebfcfde3d7df8078470eb8ceb73bca7ff5016c3dc0c91732f682f7ca19d5 not found: ID does not exist" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.708381 4736 scope.go:117] "RemoveContainer" containerID="9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290" Feb 14 11:25:59 crc kubenswrapper[4736]: E0214 11:25:59.708770 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290\": container with ID starting with 9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290 not found: ID does not exist" containerID="9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.708796 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290"} err="failed to get container status \"9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290\": rpc error: code = NotFound desc = could not find container \"9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290\": container with ID 
starting with 9f091c95f61a4652d5f3b10873698aa641645a2af3e8908856a940cff6b31290 not found: ID does not exist" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.708809 4736 scope.go:117] "RemoveContainer" containerID="4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7" Feb 14 11:25:59 crc kubenswrapper[4736]: E0214 11:25:59.709067 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7\": container with ID starting with 4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7 not found: ID does not exist" containerID="4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7" Feb 14 11:25:59 crc kubenswrapper[4736]: I0214 11:25:59.709091 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7"} err="failed to get container status \"4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7\": rpc error: code = NotFound desc = could not find container \"4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7\": container with ID starting with 4e043e71da344130ff07af409df1c20d89c0fcba3327cf1e6b74d4b7ec734fc7 not found: ID does not exist" Feb 14 11:26:00 crc kubenswrapper[4736]: I0214 11:26:00.416012 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8df5df5-dc27-4179-8747-2e9710903941" path="/var/lib/kubelet/pods/e8df5df5-dc27-4179-8747-2e9710903941/volumes" Feb 14 11:26:10 crc kubenswrapper[4736]: I0214 11:26:10.422482 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:26:10 crc kubenswrapper[4736]: E0214 11:26:10.424376 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:26:21 crc kubenswrapper[4736]: I0214 11:26:21.397123 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:26:22 crc kubenswrapper[4736]: I0214 11:26:22.084341 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"f1012c2102e430af7b7f304830c25f04cef76de4e9ba4ae33fc9d311348a7bbe"} Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.294148 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g45jm"] Feb 14 11:26:36 crc kubenswrapper[4736]: E0214 11:26:36.295103 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8df5df5-dc27-4179-8747-2e9710903941" containerName="extract-utilities" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.295118 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8df5df5-dc27-4179-8747-2e9710903941" containerName="extract-utilities" Feb 14 11:26:36 crc kubenswrapper[4736]: E0214 11:26:36.295137 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8df5df5-dc27-4179-8747-2e9710903941" containerName="extract-content" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.295147 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8df5df5-dc27-4179-8747-2e9710903941" containerName="extract-content" Feb 14 11:26:36 crc kubenswrapper[4736]: E0214 11:26:36.295169 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8df5df5-dc27-4179-8747-2e9710903941" containerName="registry-server" Feb 14 11:26:36 crc kubenswrapper[4736]: 
I0214 11:26:36.295180 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8df5df5-dc27-4179-8747-2e9710903941" containerName="registry-server" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.295396 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8df5df5-dc27-4179-8747-2e9710903941" containerName="registry-server" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.297186 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.312682 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g45jm"] Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.403125 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8n28\" (UniqueName: \"kubernetes.io/projected/02e87460-ca35-44e0-8c6c-c12ab62de7a0-kube-api-access-v8n28\") pod \"redhat-operators-g45jm\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.403311 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-catalog-content\") pod \"redhat-operators-g45jm\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.403397 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-utilities\") pod \"redhat-operators-g45jm\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 
11:26:36.504567 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-utilities\") pod \"redhat-operators-g45jm\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.504654 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8n28\" (UniqueName: \"kubernetes.io/projected/02e87460-ca35-44e0-8c6c-c12ab62de7a0-kube-api-access-v8n28\") pod \"redhat-operators-g45jm\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.504776 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-catalog-content\") pod \"redhat-operators-g45jm\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.505175 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-catalog-content\") pod \"redhat-operators-g45jm\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.505270 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-utilities\") pod \"redhat-operators-g45jm\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.527338 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-v8n28\" (UniqueName: \"kubernetes.io/projected/02e87460-ca35-44e0-8c6c-c12ab62de7a0-kube-api-access-v8n28\") pod \"redhat-operators-g45jm\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:36 crc kubenswrapper[4736]: I0214 11:26:36.619943 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:37 crc kubenswrapper[4736]: I0214 11:26:37.120326 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g45jm"] Feb 14 11:26:37 crc kubenswrapper[4736]: I0214 11:26:37.363243 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g45jm" event={"ID":"02e87460-ca35-44e0-8c6c-c12ab62de7a0","Type":"ContainerDied","Data":"fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2"} Feb 14 11:26:37 crc kubenswrapper[4736]: I0214 11:26:37.363097 4736 generic.go:334] "Generic (PLEG): container finished" podID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerID="fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2" exitCode=0 Feb 14 11:26:37 crc kubenswrapper[4736]: I0214 11:26:37.364219 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g45jm" event={"ID":"02e87460-ca35-44e0-8c6c-c12ab62de7a0","Type":"ContainerStarted","Data":"71f25ba5a11b7a836a5d2f6ec1e52b44a84fb1e6d692737c721dd8dd918431f5"} Feb 14 11:26:38 crc kubenswrapper[4736]: I0214 11:26:38.393630 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g45jm" event={"ID":"02e87460-ca35-44e0-8c6c-c12ab62de7a0","Type":"ContainerStarted","Data":"6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e"} Feb 14 11:26:43 crc kubenswrapper[4736]: I0214 11:26:43.438658 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerID="6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e" exitCode=0 Feb 14 11:26:43 crc kubenswrapper[4736]: I0214 11:26:43.438674 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g45jm" event={"ID":"02e87460-ca35-44e0-8c6c-c12ab62de7a0","Type":"ContainerDied","Data":"6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e"} Feb 14 11:26:44 crc kubenswrapper[4736]: I0214 11:26:44.453390 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g45jm" event={"ID":"02e87460-ca35-44e0-8c6c-c12ab62de7a0","Type":"ContainerStarted","Data":"c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9"} Feb 14 11:26:44 crc kubenswrapper[4736]: I0214 11:26:44.488777 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g45jm" podStartSLOduration=1.9868525400000001 podStartE2EDuration="8.488762259s" podCreationTimestamp="2026-02-14 11:26:36 +0000 UTC" firstStartedPulling="2026-02-14 11:26:37.364661964 +0000 UTC m=+2707.733289332" lastFinishedPulling="2026-02-14 11:26:43.866571683 +0000 UTC m=+2714.235199051" observedRunningTime="2026-02-14 11:26:44.476015857 +0000 UTC m=+2714.844643235" watchObservedRunningTime="2026-02-14 11:26:44.488762259 +0000 UTC m=+2714.857389637" Feb 14 11:26:46 crc kubenswrapper[4736]: I0214 11:26:46.621483 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:46 crc kubenswrapper[4736]: I0214 11:26:46.622019 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:26:47 crc kubenswrapper[4736]: I0214 11:26:47.661674 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g45jm" 
podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerName="registry-server" probeResult="failure" output=< Feb 14 11:26:47 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:26:47 crc kubenswrapper[4736]: > Feb 14 11:26:57 crc kubenswrapper[4736]: I0214 11:26:57.682612 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g45jm" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerName="registry-server" probeResult="failure" output=< Feb 14 11:26:57 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:26:57 crc kubenswrapper[4736]: > Feb 14 11:27:06 crc kubenswrapper[4736]: I0214 11:27:06.674075 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:27:06 crc kubenswrapper[4736]: I0214 11:27:06.728616 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:27:07 crc kubenswrapper[4736]: I0214 11:27:07.496871 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g45jm"] Feb 14 11:27:08 crc kubenswrapper[4736]: I0214 11:27:08.683876 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g45jm" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerName="registry-server" containerID="cri-o://c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9" gracePeriod=2 Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.154416 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.285906 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8n28\" (UniqueName: \"kubernetes.io/projected/02e87460-ca35-44e0-8c6c-c12ab62de7a0-kube-api-access-v8n28\") pod \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.286228 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-catalog-content\") pod \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.286264 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-utilities\") pod \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\" (UID: \"02e87460-ca35-44e0-8c6c-c12ab62de7a0\") " Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.287045 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-utilities" (OuterVolumeSpecName: "utilities") pod "02e87460-ca35-44e0-8c6c-c12ab62de7a0" (UID: "02e87460-ca35-44e0-8c6c-c12ab62de7a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.294102 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e87460-ca35-44e0-8c6c-c12ab62de7a0-kube-api-access-v8n28" (OuterVolumeSpecName: "kube-api-access-v8n28") pod "02e87460-ca35-44e0-8c6c-c12ab62de7a0" (UID: "02e87460-ca35-44e0-8c6c-c12ab62de7a0"). InnerVolumeSpecName "kube-api-access-v8n28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.387972 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.388018 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8n28\" (UniqueName: \"kubernetes.io/projected/02e87460-ca35-44e0-8c6c-c12ab62de7a0-kube-api-access-v8n28\") on node \"crc\" DevicePath \"\"" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.422290 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02e87460-ca35-44e0-8c6c-c12ab62de7a0" (UID: "02e87460-ca35-44e0-8c6c-c12ab62de7a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.490520 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e87460-ca35-44e0-8c6c-c12ab62de7a0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.695273 4736 generic.go:334] "Generic (PLEG): container finished" podID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerID="c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9" exitCode=0 Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.695340 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g45jm" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.695343 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g45jm" event={"ID":"02e87460-ca35-44e0-8c6c-c12ab62de7a0","Type":"ContainerDied","Data":"c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9"} Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.695483 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g45jm" event={"ID":"02e87460-ca35-44e0-8c6c-c12ab62de7a0","Type":"ContainerDied","Data":"71f25ba5a11b7a836a5d2f6ec1e52b44a84fb1e6d692737c721dd8dd918431f5"} Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.695509 4736 scope.go:117] "RemoveContainer" containerID="c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.737884 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g45jm"] Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.741203 4736 scope.go:117] "RemoveContainer" containerID="6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.749997 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g45jm"] Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.768511 4736 scope.go:117] "RemoveContainer" containerID="fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.812011 4736 scope.go:117] "RemoveContainer" containerID="c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9" Feb 14 11:27:09 crc kubenswrapper[4736]: E0214 11:27:09.812528 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9\": container with ID starting with c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9 not found: ID does not exist" containerID="c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.812558 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9"} err="failed to get container status \"c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9\": rpc error: code = NotFound desc = could not find container \"c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9\": container with ID starting with c23a7cdb7ba2bc4dd8a31e7a626293b4071074b823f1dc13fa2d0230155db0a9 not found: ID does not exist" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.812578 4736 scope.go:117] "RemoveContainer" containerID="6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e" Feb 14 11:27:09 crc kubenswrapper[4736]: E0214 11:27:09.813137 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e\": container with ID starting with 6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e not found: ID does not exist" containerID="6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.813179 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e"} err="failed to get container status \"6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e\": rpc error: code = NotFound desc = could not find container \"6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e\": container with ID 
starting with 6c157191c3686c4a75fadb6cf954440d5191fdea4cceedbae0d97762a7a7751e not found: ID does not exist" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.813205 4736 scope.go:117] "RemoveContainer" containerID="fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2" Feb 14 11:27:09 crc kubenswrapper[4736]: E0214 11:27:09.814017 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2\": container with ID starting with fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2 not found: ID does not exist" containerID="fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2" Feb 14 11:27:09 crc kubenswrapper[4736]: I0214 11:27:09.814045 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2"} err="failed to get container status \"fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2\": rpc error: code = NotFound desc = could not find container \"fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2\": container with ID starting with fcd508d331cab51a71708db53729279a9369a2f2bd22ef1ecff84b4e34fd3df2 not found: ID does not exist" Feb 14 11:27:10 crc kubenswrapper[4736]: I0214 11:27:10.418038 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" path="/var/lib/kubelet/pods/02e87460-ca35-44e0-8c6c-c12ab62de7a0/volumes" Feb 14 11:28:35 crc kubenswrapper[4736]: I0214 11:28:35.455982 4736 generic.go:334] "Generic (PLEG): container finished" podID="8c44413d-b97e-45f6-80d1-71f5e489c4ac" containerID="85e6b5d4d781134ff354fee3575fd8a1f3fcca0699fabaeb1c11b3a604c6a6aa" exitCode=0 Feb 14 11:28:35 crc kubenswrapper[4736]: I0214 11:28:35.456065 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" event={"ID":"8c44413d-b97e-45f6-80d1-71f5e489c4ac","Type":"ContainerDied","Data":"85e6b5d4d781134ff354fee3575fd8a1f3fcca0699fabaeb1c11b3a604c6a6aa"} Feb 14 11:28:36 crc kubenswrapper[4736]: I0214 11:28:36.886810 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:28:36 crc kubenswrapper[4736]: I0214 11:28:36.997012 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-inventory\") pod \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " Feb 14 11:28:36 crc kubenswrapper[4736]: I0214 11:28:36.997090 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-telemetry-combined-ca-bundle\") pod \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " Feb 14 11:28:36 crc kubenswrapper[4736]: I0214 11:28:36.997225 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-0\") pod \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " Feb 14 11:28:36 crc kubenswrapper[4736]: I0214 11:28:36.997311 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-1\") pod \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " Feb 14 11:28:36 crc kubenswrapper[4736]: I0214 11:28:36.997370 4736 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ssh-key-openstack-edpm-ipam\") pod \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:36.997449 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-2\") pod \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:36.997503 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twdh9\" (UniqueName: \"kubernetes.io/projected/8c44413d-b97e-45f6-80d1-71f5e489c4ac-kube-api-access-twdh9\") pod \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\" (UID: \"8c44413d-b97e-45f6-80d1-71f5e489c4ac\") " Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.002986 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c44413d-b97e-45f6-80d1-71f5e489c4ac-kube-api-access-twdh9" (OuterVolumeSpecName: "kube-api-access-twdh9") pod "8c44413d-b97e-45f6-80d1-71f5e489c4ac" (UID: "8c44413d-b97e-45f6-80d1-71f5e489c4ac"). InnerVolumeSpecName "kube-api-access-twdh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.005209 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8c44413d-b97e-45f6-80d1-71f5e489c4ac" (UID: "8c44413d-b97e-45f6-80d1-71f5e489c4ac"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.026372 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8c44413d-b97e-45f6-80d1-71f5e489c4ac" (UID: "8c44413d-b97e-45f6-80d1-71f5e489c4ac"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.032931 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8c44413d-b97e-45f6-80d1-71f5e489c4ac" (UID: "8c44413d-b97e-45f6-80d1-71f5e489c4ac"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.033531 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8c44413d-b97e-45f6-80d1-71f5e489c4ac" (UID: "8c44413d-b97e-45f6-80d1-71f5e489c4ac"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.041242 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c44413d-b97e-45f6-80d1-71f5e489c4ac" (UID: "8c44413d-b97e-45f6-80d1-71f5e489c4ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.056896 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-inventory" (OuterVolumeSpecName: "inventory") pod "8c44413d-b97e-45f6-80d1-71f5e489c4ac" (UID: "8c44413d-b97e-45f6-80d1-71f5e489c4ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.100522 4736 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.100558 4736 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.100571 4736 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.100584 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.100597 4736 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 14 11:28:37 crc 
kubenswrapper[4736]: I0214 11:28:37.100610 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twdh9\" (UniqueName: \"kubernetes.io/projected/8c44413d-b97e-45f6-80d1-71f5e489c4ac-kube-api-access-twdh9\") on node \"crc\" DevicePath \"\"" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.100626 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c44413d-b97e-45f6-80d1-71f5e489c4ac-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.480407 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" event={"ID":"8c44413d-b97e-45f6-80d1-71f5e489c4ac","Type":"ContainerDied","Data":"285038664b526995103bd6dfd075e6fe6e80cf5c72e78929a555d689d7af6801"} Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.480452 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="285038664b526995103bd6dfd075e6fe6e80cf5c72e78929a555d689d7af6801" Feb 14 11:28:37 crc kubenswrapper[4736]: I0214 11:28:37.480529 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5" Feb 14 11:28:47 crc kubenswrapper[4736]: I0214 11:28:47.695589 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:28:47 crc kubenswrapper[4736]: I0214 11:28:47.696049 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.497585 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kd98w"] Feb 14 11:29:04 crc kubenswrapper[4736]: E0214 11:29:04.498447 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerName="extract-utilities" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.498459 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerName="extract-utilities" Feb 14 11:29:04 crc kubenswrapper[4736]: E0214 11:29:04.498476 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerName="registry-server" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.498482 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerName="registry-server" Feb 14 11:29:04 crc kubenswrapper[4736]: E0214 11:29:04.498491 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" 
containerName="extract-content" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.498497 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerName="extract-content" Feb 14 11:29:04 crc kubenswrapper[4736]: E0214 11:29:04.498516 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c44413d-b97e-45f6-80d1-71f5e489c4ac" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.498524 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c44413d-b97e-45f6-80d1-71f5e489c4ac" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.498695 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c44413d-b97e-45f6-80d1-71f5e489c4ac" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.498714 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e87460-ca35-44e0-8c6c-c12ab62de7a0" containerName="registry-server" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.499978 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.514651 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kd98w"] Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.637648 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276xs\" (UniqueName: \"kubernetes.io/projected/17e47c17-7ff7-4082-b4e7-56d8e40533c3-kube-api-access-276xs\") pod \"redhat-marketplace-kd98w\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.637690 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-utilities\") pod \"redhat-marketplace-kd98w\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.637989 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-catalog-content\") pod \"redhat-marketplace-kd98w\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.740388 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276xs\" (UniqueName: \"kubernetes.io/projected/17e47c17-7ff7-4082-b4e7-56d8e40533c3-kube-api-access-276xs\") pod \"redhat-marketplace-kd98w\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.740437 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-utilities\") pod \"redhat-marketplace-kd98w\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.740599 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-catalog-content\") pod \"redhat-marketplace-kd98w\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.741094 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-utilities\") pod \"redhat-marketplace-kd98w\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.741409 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-catalog-content\") pod \"redhat-marketplace-kd98w\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.771200 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276xs\" (UniqueName: \"kubernetes.io/projected/17e47c17-7ff7-4082-b4e7-56d8e40533c3-kube-api-access-276xs\") pod \"redhat-marketplace-kd98w\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:04 crc kubenswrapper[4736]: I0214 11:29:04.817126 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:05 crc kubenswrapper[4736]: I0214 11:29:05.386316 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kd98w"] Feb 14 11:29:05 crc kubenswrapper[4736]: I0214 11:29:05.755588 4736 generic.go:334] "Generic (PLEG): container finished" podID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerID="95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287" exitCode=0 Feb 14 11:29:05 crc kubenswrapper[4736]: I0214 11:29:05.755815 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kd98w" event={"ID":"17e47c17-7ff7-4082-b4e7-56d8e40533c3","Type":"ContainerDied","Data":"95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287"} Feb 14 11:29:05 crc kubenswrapper[4736]: I0214 11:29:05.755921 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kd98w" event={"ID":"17e47c17-7ff7-4082-b4e7-56d8e40533c3","Type":"ContainerStarted","Data":"53cb2ae57a0481417713c5a288c7105c14586d7323309e142b9d964c22bc03e1"} Feb 14 11:29:05 crc kubenswrapper[4736]: I0214 11:29:05.757487 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 11:29:06 crc kubenswrapper[4736]: I0214 11:29:06.766693 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kd98w" event={"ID":"17e47c17-7ff7-4082-b4e7-56d8e40533c3","Type":"ContainerStarted","Data":"6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7"} Feb 14 11:29:07 crc kubenswrapper[4736]: I0214 11:29:07.778316 4736 generic.go:334] "Generic (PLEG): container finished" podID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerID="6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7" exitCode=0 Feb 14 11:29:07 crc kubenswrapper[4736]: I0214 11:29:07.778358 4736 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-kd98w" event={"ID":"17e47c17-7ff7-4082-b4e7-56d8e40533c3","Type":"ContainerDied","Data":"6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7"} Feb 14 11:29:08 crc kubenswrapper[4736]: I0214 11:29:08.789784 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kd98w" event={"ID":"17e47c17-7ff7-4082-b4e7-56d8e40533c3","Type":"ContainerStarted","Data":"98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd"} Feb 14 11:29:08 crc kubenswrapper[4736]: I0214 11:29:08.827544 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kd98w" podStartSLOduration=2.405687061 podStartE2EDuration="4.827527491s" podCreationTimestamp="2026-02-14 11:29:04 +0000 UTC" firstStartedPulling="2026-02-14 11:29:05.757301828 +0000 UTC m=+2856.125929196" lastFinishedPulling="2026-02-14 11:29:08.179142248 +0000 UTC m=+2858.547769626" observedRunningTime="2026-02-14 11:29:08.821032976 +0000 UTC m=+2859.189660344" watchObservedRunningTime="2026-02-14 11:29:08.827527491 +0000 UTC m=+2859.196154859" Feb 14 11:29:14 crc kubenswrapper[4736]: I0214 11:29:14.817735 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:14 crc kubenswrapper[4736]: I0214 11:29:14.819196 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:14 crc kubenswrapper[4736]: I0214 11:29:14.879733 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:14 crc kubenswrapper[4736]: I0214 11:29:14.930374 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:15 crc kubenswrapper[4736]: I0214 11:29:15.137185 4736 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kd98w"] Feb 14 11:29:16 crc kubenswrapper[4736]: I0214 11:29:16.858333 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kd98w" podUID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerName="registry-server" containerID="cri-o://98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd" gracePeriod=2 Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.332051 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.381295 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-catalog-content\") pod \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.381339 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-utilities\") pod \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.381457 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-276xs\" (UniqueName: \"kubernetes.io/projected/17e47c17-7ff7-4082-b4e7-56d8e40533c3-kube-api-access-276xs\") pod \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\" (UID: \"17e47c17-7ff7-4082-b4e7-56d8e40533c3\") " Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.382680 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-utilities" (OuterVolumeSpecName: "utilities") pod 
"17e47c17-7ff7-4082-b4e7-56d8e40533c3" (UID: "17e47c17-7ff7-4082-b4e7-56d8e40533c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.400940 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e47c17-7ff7-4082-b4e7-56d8e40533c3-kube-api-access-276xs" (OuterVolumeSpecName: "kube-api-access-276xs") pod "17e47c17-7ff7-4082-b4e7-56d8e40533c3" (UID: "17e47c17-7ff7-4082-b4e7-56d8e40533c3"). InnerVolumeSpecName "kube-api-access-276xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.411350 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17e47c17-7ff7-4082-b4e7-56d8e40533c3" (UID: "17e47c17-7ff7-4082-b4e7-56d8e40533c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.483858 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.484222 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e47c17-7ff7-4082-b4e7-56d8e40533c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.484297 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-276xs\" (UniqueName: \"kubernetes.io/projected/17e47c17-7ff7-4082-b4e7-56d8e40533c3-kube-api-access-276xs\") on node \"crc\" DevicePath \"\"" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.695218 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.695565 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.870566 4736 generic.go:334] "Generic (PLEG): container finished" podID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerID="98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd" exitCode=0 Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.870611 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kd98w" event={"ID":"17e47c17-7ff7-4082-b4e7-56d8e40533c3","Type":"ContainerDied","Data":"98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd"} Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.870664 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kd98w" event={"ID":"17e47c17-7ff7-4082-b4e7-56d8e40533c3","Type":"ContainerDied","Data":"53cb2ae57a0481417713c5a288c7105c14586d7323309e142b9d964c22bc03e1"} Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.870713 4736 scope.go:117] "RemoveContainer" containerID="98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.870738 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kd98w" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.903085 4736 scope.go:117] "RemoveContainer" containerID="6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.917512 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kd98w"] Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.949614 4736 scope.go:117] "RemoveContainer" containerID="95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.952838 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kd98w"] Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.993863 4736 scope.go:117] "RemoveContainer" containerID="98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd" Feb 14 11:29:17 crc kubenswrapper[4736]: E0214 11:29:17.994552 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd\": container with ID starting with 98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd not found: ID does not exist" containerID="98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.994603 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd"} err="failed to get container status \"98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd\": rpc error: code = NotFound desc = could not find container \"98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd\": container with ID starting with 98e9fefc0b4acea026f606021d2516b153fd614ec0d14593973666ee1b1b8cdd not found: ID does not exist" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.994635 4736 scope.go:117] "RemoveContainer" containerID="6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7" Feb 14 11:29:17 crc kubenswrapper[4736]: E0214 11:29:17.995111 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7\": container with ID starting with 6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7 not found: ID does not exist" containerID="6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.995142 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7"} err="failed to get container status \"6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7\": rpc error: code = NotFound desc = could not find container \"6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7\": container with ID 
starting with 6b4bbff77ffd15ab80807ac58247353374e256cb0964a089331c2ee823dd7ce7 not found: ID does not exist" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.995162 4736 scope.go:117] "RemoveContainer" containerID="95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287" Feb 14 11:29:17 crc kubenswrapper[4736]: E0214 11:29:17.995520 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287\": container with ID starting with 95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287 not found: ID does not exist" containerID="95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287" Feb 14 11:29:17 crc kubenswrapper[4736]: I0214 11:29:17.995554 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287"} err="failed to get container status \"95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287\": rpc error: code = NotFound desc = could not find container \"95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287\": container with ID starting with 95d49a3d736ee6de7fe0c2bfe6781e610dd370e7abcaece72886659cf7160287 not found: ID does not exist" Feb 14 11:29:18 crc kubenswrapper[4736]: I0214 11:29:18.419076 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" path="/var/lib/kubelet/pods/17e47c17-7ff7-4082-b4e7-56d8e40533c3/volumes" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.251560 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 11:29:37 crc kubenswrapper[4736]: E0214 11:29:37.252691 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerName="registry-server" Feb 14 11:29:37 crc kubenswrapper[4736]: 
I0214 11:29:37.252708 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerName="registry-server" Feb 14 11:29:37 crc kubenswrapper[4736]: E0214 11:29:37.252725 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerName="extract-content" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.252759 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerName="extract-content" Feb 14 11:29:37 crc kubenswrapper[4736]: E0214 11:29:37.252784 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerName="extract-utilities" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.252793 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerName="extract-utilities" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.253048 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e47c17-7ff7-4082-b4e7-56d8e40533c3" containerName="registry-server" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.253795 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.256718 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.256977 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.256990 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-m5rsw" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.257810 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.268436 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.419333 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.419529 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-config-data\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.419595 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.419614 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.419730 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.419777 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.419806 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.419832 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkt9x\" (UniqueName: 
\"kubernetes.io/projected/ab2bcae4-a5d8-471d-a031-b0e810759ab1-kube-api-access-xkt9x\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.419857 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.521351 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.522183 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.522243 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.522301 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") 
pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.522356 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkt9x\" (UniqueName: \"kubernetes.io/projected/ab2bcae4-a5d8-471d-a031-b0e810759ab1-kube-api-access-xkt9x\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.522482 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.522594 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.522659 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-config-data\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.522736 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: 
\"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.522773 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.523917 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.525403 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.525959 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-config-data\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.526459 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc 
kubenswrapper[4736]: I0214 11:29:37.528129 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.528579 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.532495 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.548424 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkt9x\" (UniqueName: \"kubernetes.io/projected/ab2bcae4-a5d8-471d-a031-b0e810759ab1-kube-api-access-xkt9x\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.554760 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " pod="openstack/tempest-tests-tempest" Feb 14 11:29:37 crc kubenswrapper[4736]: I0214 11:29:37.580231 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 11:29:38 crc kubenswrapper[4736]: I0214 11:29:38.054931 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 11:29:38 crc kubenswrapper[4736]: I0214 11:29:38.069163 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ab2bcae4-a5d8-471d-a031-b0e810759ab1","Type":"ContainerStarted","Data":"919f2dd88d220755b40df4665a1818a1bf39d6c3decc815a6911eb17332c3c74"} Feb 14 11:29:47 crc kubenswrapper[4736]: I0214 11:29:47.696834 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:29:47 crc kubenswrapper[4736]: I0214 11:29:47.698578 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:29:47 crc kubenswrapper[4736]: I0214 11:29:47.698640 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:29:47 crc kubenswrapper[4736]: I0214 11:29:47.699665 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1012c2102e430af7b7f304830c25f04cef76de4e9ba4ae33fc9d311348a7bbe"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:29:47 crc kubenswrapper[4736]: I0214 11:29:47.699839 4736 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://f1012c2102e430af7b7f304830c25f04cef76de4e9ba4ae33fc9d311348a7bbe" gracePeriod=600 Feb 14 11:29:48 crc kubenswrapper[4736]: I0214 11:29:48.162631 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="f1012c2102e430af7b7f304830c25f04cef76de4e9ba4ae33fc9d311348a7bbe" exitCode=0 Feb 14 11:29:48 crc kubenswrapper[4736]: I0214 11:29:48.162678 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"f1012c2102e430af7b7f304830c25f04cef76de4e9ba4ae33fc9d311348a7bbe"} Feb 14 11:29:48 crc kubenswrapper[4736]: I0214 11:29:48.162734 4736 scope.go:117] "RemoveContainer" containerID="c1c16bd1f104461c71c043765b406789f90ce9e6d7e3af53b95aa0d34945238a" Feb 14 11:29:49 crc kubenswrapper[4736]: I0214 11:29:49.185532 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1"} Feb 14 11:29:49 crc kubenswrapper[4736]: I0214 11:29:49.857536 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-swq57"] Feb 14 11:29:49 crc kubenswrapper[4736]: I0214 11:29:49.859614 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:49 crc kubenswrapper[4736]: I0214 11:29:49.887892 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swq57"] Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.013496 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-utilities\") pod \"community-operators-swq57\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.013585 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25f6n\" (UniqueName: \"kubernetes.io/projected/0c8cd525-f393-4f32-915c-525c8445aed3-kube-api-access-25f6n\") pod \"community-operators-swq57\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.013675 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-catalog-content\") pod \"community-operators-swq57\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.119286 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25f6n\" (UniqueName: \"kubernetes.io/projected/0c8cd525-f393-4f32-915c-525c8445aed3-kube-api-access-25f6n\") pod \"community-operators-swq57\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.119714 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-catalog-content\") pod \"community-operators-swq57\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.119806 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-utilities\") pod \"community-operators-swq57\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.120416 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-utilities\") pod \"community-operators-swq57\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.120735 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-catalog-content\") pod \"community-operators-swq57\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.153831 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25f6n\" (UniqueName: \"kubernetes.io/projected/0c8cd525-f393-4f32-915c-525c8445aed3-kube-api-access-25f6n\") pod \"community-operators-swq57\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " pod="openshift-marketplace/community-operators-swq57" Feb 14 11:29:50 crc kubenswrapper[4736]: I0214 11:29:50.183265 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-swq57" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.173179 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8"] Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.175122 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.178106 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.184044 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.192773 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8"] Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.351422 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a99cf860-17b3-4a41-92de-93cd2348436e-config-volume\") pod \"collect-profiles-29517810-n8wj8\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.351499 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a99cf860-17b3-4a41-92de-93cd2348436e-secret-volume\") pod \"collect-profiles-29517810-n8wj8\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: 
I0214 11:30:00.351648 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4cdb\" (UniqueName: \"kubernetes.io/projected/a99cf860-17b3-4a41-92de-93cd2348436e-kube-api-access-d4cdb\") pod \"collect-profiles-29517810-n8wj8\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.453165 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4cdb\" (UniqueName: \"kubernetes.io/projected/a99cf860-17b3-4a41-92de-93cd2348436e-kube-api-access-d4cdb\") pod \"collect-profiles-29517810-n8wj8\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.453290 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a99cf860-17b3-4a41-92de-93cd2348436e-config-volume\") pod \"collect-profiles-29517810-n8wj8\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.453330 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a99cf860-17b3-4a41-92de-93cd2348436e-secret-volume\") pod \"collect-profiles-29517810-n8wj8\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.454510 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a99cf860-17b3-4a41-92de-93cd2348436e-config-volume\") pod \"collect-profiles-29517810-n8wj8\" (UID: 
\"a99cf860-17b3-4a41-92de-93cd2348436e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.466621 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a99cf860-17b3-4a41-92de-93cd2348436e-secret-volume\") pod \"collect-profiles-29517810-n8wj8\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.473322 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4cdb\" (UniqueName: \"kubernetes.io/projected/a99cf860-17b3-4a41-92de-93cd2348436e-kube-api-access-d4cdb\") pod \"collect-profiles-29517810-n8wj8\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:00 crc kubenswrapper[4736]: I0214 11:30:00.500080 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:12 crc kubenswrapper[4736]: E0214 11:30:12.065283 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 14 11:30:12 crc kubenswrapper[4736]: E0214 11:30:12.066493 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,Mount
Path:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkt9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(ab2bcae4-a5d8-471d-a031-b0e810759ab1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 11:30:12 crc kubenswrapper[4736]: E0214 11:30:12.069172 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/tempest-tests-tempest" podUID="ab2bcae4-a5d8-471d-a031-b0e810759ab1" Feb 14 11:30:12 crc kubenswrapper[4736]: E0214 11:30:12.421138 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="ab2bcae4-a5d8-471d-a031-b0e810759ab1" Feb 14 11:30:12 crc kubenswrapper[4736]: I0214 11:30:12.554312 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8"] Feb 14 11:30:12 crc kubenswrapper[4736]: I0214 11:30:12.614929 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swq57"] Feb 14 11:30:12 crc kubenswrapper[4736]: W0214 11:30:12.642042 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c8cd525_f393_4f32_915c_525c8445aed3.slice/crio-c8d44938208517eef56c50f0a78118240938540694dbb40ba4ee79caf221f2c3 WatchSource:0}: Error finding container c8d44938208517eef56c50f0a78118240938540694dbb40ba4ee79caf221f2c3: Status 404 returned error can't find the container with id c8d44938208517eef56c50f0a78118240938540694dbb40ba4ee79caf221f2c3 Feb 14 11:30:13 crc kubenswrapper[4736]: I0214 11:30:13.428843 4736 generic.go:334] "Generic (PLEG): container finished" podID="0c8cd525-f393-4f32-915c-525c8445aed3" containerID="f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf" exitCode=0 Feb 14 11:30:13 crc kubenswrapper[4736]: I0214 11:30:13.429135 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swq57" event={"ID":"0c8cd525-f393-4f32-915c-525c8445aed3","Type":"ContainerDied","Data":"f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf"} Feb 14 11:30:13 crc 
kubenswrapper[4736]: I0214 11:30:13.429166 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swq57" event={"ID":"0c8cd525-f393-4f32-915c-525c8445aed3","Type":"ContainerStarted","Data":"c8d44938208517eef56c50f0a78118240938540694dbb40ba4ee79caf221f2c3"} Feb 14 11:30:13 crc kubenswrapper[4736]: I0214 11:30:13.433066 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" event={"ID":"a99cf860-17b3-4a41-92de-93cd2348436e","Type":"ContainerStarted","Data":"a4b8b788e7b03bfb17c8a4535d5e5102e2e39bc08f478e55127be71a16b703f0"} Feb 14 11:30:13 crc kubenswrapper[4736]: I0214 11:30:13.433111 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" event={"ID":"a99cf860-17b3-4a41-92de-93cd2348436e","Type":"ContainerStarted","Data":"18d883a9591054a91cc7cf768ee1acefd04a99fb25fa95e3e385a8954effd8fd"} Feb 14 11:30:13 crc kubenswrapper[4736]: I0214 11:30:13.499296 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" podStartSLOduration=13.499272401 podStartE2EDuration="13.499272401s" podCreationTimestamp="2026-02-14 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 11:30:13.482694549 +0000 UTC m=+2923.851321957" watchObservedRunningTime="2026-02-14 11:30:13.499272401 +0000 UTC m=+2923.867899779" Feb 14 11:30:14 crc kubenswrapper[4736]: I0214 11:30:14.446267 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swq57" event={"ID":"0c8cd525-f393-4f32-915c-525c8445aed3","Type":"ContainerStarted","Data":"963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050"} Feb 14 11:30:14 crc kubenswrapper[4736]: I0214 11:30:14.450947 4736 
generic.go:334] "Generic (PLEG): container finished" podID="a99cf860-17b3-4a41-92de-93cd2348436e" containerID="a4b8b788e7b03bfb17c8a4535d5e5102e2e39bc08f478e55127be71a16b703f0" exitCode=0 Feb 14 11:30:14 crc kubenswrapper[4736]: I0214 11:30:14.450984 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" event={"ID":"a99cf860-17b3-4a41-92de-93cd2348436e","Type":"ContainerDied","Data":"a4b8b788e7b03bfb17c8a4535d5e5102e2e39bc08f478e55127be71a16b703f0"} Feb 14 11:30:15 crc kubenswrapper[4736]: I0214 11:30:15.854412 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:15 crc kubenswrapper[4736]: I0214 11:30:15.972283 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4cdb\" (UniqueName: \"kubernetes.io/projected/a99cf860-17b3-4a41-92de-93cd2348436e-kube-api-access-d4cdb\") pod \"a99cf860-17b3-4a41-92de-93cd2348436e\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " Feb 14 11:30:15 crc kubenswrapper[4736]: I0214 11:30:15.972356 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a99cf860-17b3-4a41-92de-93cd2348436e-config-volume\") pod \"a99cf860-17b3-4a41-92de-93cd2348436e\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " Feb 14 11:30:15 crc kubenswrapper[4736]: I0214 11:30:15.972419 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a99cf860-17b3-4a41-92de-93cd2348436e-secret-volume\") pod \"a99cf860-17b3-4a41-92de-93cd2348436e\" (UID: \"a99cf860-17b3-4a41-92de-93cd2348436e\") " Feb 14 11:30:15 crc kubenswrapper[4736]: I0214 11:30:15.973542 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a99cf860-17b3-4a41-92de-93cd2348436e-config-volume" (OuterVolumeSpecName: "config-volume") pod "a99cf860-17b3-4a41-92de-93cd2348436e" (UID: "a99cf860-17b3-4a41-92de-93cd2348436e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:30:15 crc kubenswrapper[4736]: I0214 11:30:15.979461 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99cf860-17b3-4a41-92de-93cd2348436e-kube-api-access-d4cdb" (OuterVolumeSpecName: "kube-api-access-d4cdb") pod "a99cf860-17b3-4a41-92de-93cd2348436e" (UID: "a99cf860-17b3-4a41-92de-93cd2348436e"). InnerVolumeSpecName "kube-api-access-d4cdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:30:15 crc kubenswrapper[4736]: I0214 11:30:15.985994 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a99cf860-17b3-4a41-92de-93cd2348436e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a99cf860-17b3-4a41-92de-93cd2348436e" (UID: "a99cf860-17b3-4a41-92de-93cd2348436e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:30:16 crc kubenswrapper[4736]: I0214 11:30:16.075300 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4cdb\" (UniqueName: \"kubernetes.io/projected/a99cf860-17b3-4a41-92de-93cd2348436e-kube-api-access-d4cdb\") on node \"crc\" DevicePath \"\"" Feb 14 11:30:16 crc kubenswrapper[4736]: I0214 11:30:16.075341 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a99cf860-17b3-4a41-92de-93cd2348436e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 11:30:16 crc kubenswrapper[4736]: I0214 11:30:16.075351 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a99cf860-17b3-4a41-92de-93cd2348436e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 11:30:16 crc kubenswrapper[4736]: I0214 11:30:16.467843 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" event={"ID":"a99cf860-17b3-4a41-92de-93cd2348436e","Type":"ContainerDied","Data":"18d883a9591054a91cc7cf768ee1acefd04a99fb25fa95e3e385a8954effd8fd"} Feb 14 11:30:16 crc kubenswrapper[4736]: I0214 11:30:16.468165 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18d883a9591054a91cc7cf768ee1acefd04a99fb25fa95e3e385a8954effd8fd" Feb 14 11:30:16 crc kubenswrapper[4736]: I0214 11:30:16.467912 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517810-n8wj8" Feb 14 11:30:16 crc kubenswrapper[4736]: I0214 11:30:16.959793 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657"] Feb 14 11:30:16 crc kubenswrapper[4736]: I0214 11:30:16.968525 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517765-pc657"] Feb 14 11:30:17 crc kubenswrapper[4736]: I0214 11:30:17.479407 4736 generic.go:334] "Generic (PLEG): container finished" podID="0c8cd525-f393-4f32-915c-525c8445aed3" containerID="963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050" exitCode=0 Feb 14 11:30:17 crc kubenswrapper[4736]: I0214 11:30:17.479463 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swq57" event={"ID":"0c8cd525-f393-4f32-915c-525c8445aed3","Type":"ContainerDied","Data":"963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050"} Feb 14 11:30:18 crc kubenswrapper[4736]: I0214 11:30:18.411707 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cadeff5-99fa-4350-ab0b-fafde7e713a1" path="/var/lib/kubelet/pods/4cadeff5-99fa-4350-ab0b-fafde7e713a1/volumes" Feb 14 11:30:18 crc kubenswrapper[4736]: I0214 11:30:18.488061 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swq57" event={"ID":"0c8cd525-f393-4f32-915c-525c8445aed3","Type":"ContainerStarted","Data":"431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630"} Feb 14 11:30:18 crc kubenswrapper[4736]: I0214 11:30:18.515160 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-swq57" podStartSLOduration=24.835562446 podStartE2EDuration="29.515130241s" podCreationTimestamp="2026-02-14 11:29:49 +0000 UTC" firstStartedPulling="2026-02-14 
11:30:13.432063141 +0000 UTC m=+2923.800690519" lastFinishedPulling="2026-02-14 11:30:18.111630936 +0000 UTC m=+2928.480258314" observedRunningTime="2026-02-14 11:30:18.512822865 +0000 UTC m=+2928.881450253" watchObservedRunningTime="2026-02-14 11:30:18.515130241 +0000 UTC m=+2928.883757649" Feb 14 11:30:20 crc kubenswrapper[4736]: I0214 11:30:20.184153 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-swq57" Feb 14 11:30:20 crc kubenswrapper[4736]: I0214 11:30:20.184772 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-swq57" Feb 14 11:30:21 crc kubenswrapper[4736]: I0214 11:30:21.232809 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-swq57" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" containerName="registry-server" probeResult="failure" output=< Feb 14 11:30:21 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:30:21 crc kubenswrapper[4736]: > Feb 14 11:30:25 crc kubenswrapper[4736]: I0214 11:30:25.562066 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ab2bcae4-a5d8-471d-a031-b0e810759ab1","Type":"ContainerStarted","Data":"fb38cb3721465a5e54e2122776e466c5a383fbb6e19ead75e5ea306db4d2fe0a"} Feb 14 11:30:25 crc kubenswrapper[4736]: I0214 11:30:25.589780 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.8300641779999998 podStartE2EDuration="49.589718346s" podCreationTimestamp="2026-02-14 11:29:36 +0000 UTC" firstStartedPulling="2026-02-14 11:29:38.056485257 +0000 UTC m=+2888.425112625" lastFinishedPulling="2026-02-14 11:30:23.816139405 +0000 UTC m=+2934.184766793" observedRunningTime="2026-02-14 11:30:25.586071412 +0000 UTC m=+2935.954698820" watchObservedRunningTime="2026-02-14 11:30:25.589718346 
+0000 UTC m=+2935.958345754" Feb 14 11:30:30 crc kubenswrapper[4736]: I0214 11:30:30.257448 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-swq57" Feb 14 11:30:30 crc kubenswrapper[4736]: I0214 11:30:30.334824 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-swq57" Feb 14 11:30:30 crc kubenswrapper[4736]: I0214 11:30:30.497846 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-swq57"] Feb 14 11:30:31 crc kubenswrapper[4736]: I0214 11:30:31.615334 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-swq57" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" containerName="registry-server" containerID="cri-o://431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630" gracePeriod=2 Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.109789 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-swq57" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.195757 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25f6n\" (UniqueName: \"kubernetes.io/projected/0c8cd525-f393-4f32-915c-525c8445aed3-kube-api-access-25f6n\") pod \"0c8cd525-f393-4f32-915c-525c8445aed3\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.196086 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-utilities\") pod \"0c8cd525-f393-4f32-915c-525c8445aed3\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.196148 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-catalog-content\") pod \"0c8cd525-f393-4f32-915c-525c8445aed3\" (UID: \"0c8cd525-f393-4f32-915c-525c8445aed3\") " Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.196708 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-utilities" (OuterVolumeSpecName: "utilities") pod "0c8cd525-f393-4f32-915c-525c8445aed3" (UID: "0c8cd525-f393-4f32-915c-525c8445aed3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.209773 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8cd525-f393-4f32-915c-525c8445aed3-kube-api-access-25f6n" (OuterVolumeSpecName: "kube-api-access-25f6n") pod "0c8cd525-f393-4f32-915c-525c8445aed3" (UID: "0c8cd525-f393-4f32-915c-525c8445aed3"). InnerVolumeSpecName "kube-api-access-25f6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.251617 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c8cd525-f393-4f32-915c-525c8445aed3" (UID: "0c8cd525-f393-4f32-915c-525c8445aed3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.298285 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.298325 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25f6n\" (UniqueName: \"kubernetes.io/projected/0c8cd525-f393-4f32-915c-525c8445aed3-kube-api-access-25f6n\") on node \"crc\" DevicePath \"\"" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.298343 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8cd525-f393-4f32-915c-525c8445aed3-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.642448 4736 generic.go:334] "Generic (PLEG): container finished" podID="0c8cd525-f393-4f32-915c-525c8445aed3" containerID="431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630" exitCode=0 Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.642486 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-swq57" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.642514 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swq57" event={"ID":"0c8cd525-f393-4f32-915c-525c8445aed3","Type":"ContainerDied","Data":"431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630"} Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.643154 4736 scope.go:117] "RemoveContainer" containerID="431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.643075 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swq57" event={"ID":"0c8cd525-f393-4f32-915c-525c8445aed3","Type":"ContainerDied","Data":"c8d44938208517eef56c50f0a78118240938540694dbb40ba4ee79caf221f2c3"} Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.669224 4736 scope.go:117] "RemoveContainer" containerID="963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.693479 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-swq57"] Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.704871 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-swq57"] Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.707788 4736 scope.go:117] "RemoveContainer" containerID="f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.741116 4736 scope.go:117] "RemoveContainer" containerID="431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630" Feb 14 11:30:32 crc kubenswrapper[4736]: E0214 11:30:32.741535 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630\": container with ID starting with 431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630 not found: ID does not exist" containerID="431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.741589 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630"} err="failed to get container status \"431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630\": rpc error: code = NotFound desc = could not find container \"431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630\": container with ID starting with 431c6019dfbbdeeee06f2a10a7fcd0afeb2754df627744a9d96809413e442630 not found: ID does not exist" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.741618 4736 scope.go:117] "RemoveContainer" containerID="963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050" Feb 14 11:30:32 crc kubenswrapper[4736]: E0214 11:30:32.741953 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050\": container with ID starting with 963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050 not found: ID does not exist" containerID="963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.741972 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050"} err="failed to get container status \"963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050\": rpc error: code = NotFound desc = could not find container \"963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050\": container with ID 
starting with 963965570f8659be6c9f0f5c5dbf38442806292795654c98b828d12c6a8e6050 not found: ID does not exist" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.741987 4736 scope.go:117] "RemoveContainer" containerID="f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf" Feb 14 11:30:32 crc kubenswrapper[4736]: E0214 11:30:32.742183 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf\": container with ID starting with f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf not found: ID does not exist" containerID="f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf" Feb 14 11:30:32 crc kubenswrapper[4736]: I0214 11:30:32.742207 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf"} err="failed to get container status \"f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf\": rpc error: code = NotFound desc = could not find container \"f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf\": container with ID starting with f273c8f1bc9a3a79328c7e3609db5e167c7006981b8fc8dfa8aa0491d01dbcaf not found: ID does not exist" Feb 14 11:30:34 crc kubenswrapper[4736]: I0214 11:30:34.411157 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" path="/var/lib/kubelet/pods/0c8cd525-f393-4f32-915c-525c8445aed3/volumes" Feb 14 11:30:36 crc kubenswrapper[4736]: I0214 11:30:36.562870 4736 scope.go:117] "RemoveContainer" containerID="95a8a0afd3edfd803943af2c83cbf15349fb0631a93de2b614310bfcadd17ecd" Feb 14 11:32:17 crc kubenswrapper[4736]: I0214 11:32:17.695044 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:32:17 crc kubenswrapper[4736]: I0214 11:32:17.695650 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:32:47 crc kubenswrapper[4736]: I0214 11:32:47.695319 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:32:47 crc kubenswrapper[4736]: I0214 11:32:47.696111 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:33:17 crc kubenswrapper[4736]: I0214 11:33:17.695570 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:33:17 crc kubenswrapper[4736]: I0214 11:33:17.696181 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 14 11:33:17 crc kubenswrapper[4736]: I0214 11:33:17.696238 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:33:17 crc kubenswrapper[4736]: I0214 11:33:17.697179 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:33:17 crc kubenswrapper[4736]: I0214 11:33:17.697244 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" gracePeriod=600 Feb 14 11:33:17 crc kubenswrapper[4736]: E0214 11:33:17.888190 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:33:18 crc kubenswrapper[4736]: I0214 11:33:18.177678 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" exitCode=0 Feb 14 11:33:18 crc kubenswrapper[4736]: I0214 11:33:18.177769 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" 
event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1"} Feb 14 11:33:18 crc kubenswrapper[4736]: I0214 11:33:18.177974 4736 scope.go:117] "RemoveContainer" containerID="f1012c2102e430af7b7f304830c25f04cef76de4e9ba4ae33fc9d311348a7bbe" Feb 14 11:33:18 crc kubenswrapper[4736]: I0214 11:33:18.178687 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:33:18 crc kubenswrapper[4736]: E0214 11:33:18.179028 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:33:29 crc kubenswrapper[4736]: I0214 11:33:29.398034 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:33:29 crc kubenswrapper[4736]: E0214 11:33:29.398970 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:33:42 crc kubenswrapper[4736]: I0214 11:33:42.397243 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:33:42 crc kubenswrapper[4736]: E0214 11:33:42.398100 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:33:56 crc kubenswrapper[4736]: I0214 11:33:56.397792 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:33:56 crc kubenswrapper[4736]: E0214 11:33:56.398388 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:34:07 crc kubenswrapper[4736]: I0214 11:34:07.397299 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:34:07 crc kubenswrapper[4736]: E0214 11:34:07.398364 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:34:21 crc kubenswrapper[4736]: I0214 11:34:21.397379 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:34:21 crc kubenswrapper[4736]: E0214 11:34:21.398118 4736 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:34:33 crc kubenswrapper[4736]: I0214 11:34:33.398026 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:34:33 crc kubenswrapper[4736]: E0214 11:34:33.399011 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:34:46 crc kubenswrapper[4736]: I0214 11:34:46.402657 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:34:46 crc kubenswrapper[4736]: E0214 11:34:46.405507 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:34:58 crc kubenswrapper[4736]: I0214 11:34:58.398214 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:34:58 crc kubenswrapper[4736]: E0214 11:34:58.398852 4736 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:35:09 crc kubenswrapper[4736]: I0214 11:35:09.397515 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:35:09 crc kubenswrapper[4736]: E0214 11:35:09.398497 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:35:22 crc kubenswrapper[4736]: I0214 11:35:22.397615 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:35:22 crc kubenswrapper[4736]: E0214 11:35:22.398493 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:35:34 crc kubenswrapper[4736]: I0214 11:35:34.398008 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:35:34 crc kubenswrapper[4736]: E0214 11:35:34.398897 4736 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:35:49 crc kubenswrapper[4736]: I0214 11:35:49.397032 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:35:49 crc kubenswrapper[4736]: E0214 11:35:49.397807 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:36:04 crc kubenswrapper[4736]: I0214 11:36:04.398015 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:36:04 crc kubenswrapper[4736]: E0214 11:36:04.398852 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.117468 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9pnsc"] Feb 14 11:36:13 crc kubenswrapper[4736]: E0214 
11:36:13.118303 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99cf860-17b3-4a41-92de-93cd2348436e" containerName="collect-profiles" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.118317 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99cf860-17b3-4a41-92de-93cd2348436e" containerName="collect-profiles" Feb 14 11:36:13 crc kubenswrapper[4736]: E0214 11:36:13.118326 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" containerName="extract-content" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.118332 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" containerName="extract-content" Feb 14 11:36:13 crc kubenswrapper[4736]: E0214 11:36:13.118348 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" containerName="registry-server" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.118354 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" containerName="registry-server" Feb 14 11:36:13 crc kubenswrapper[4736]: E0214 11:36:13.118366 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" containerName="extract-utilities" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.118372 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" containerName="extract-utilities" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.118547 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a99cf860-17b3-4a41-92de-93cd2348436e" containerName="collect-profiles" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.118556 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8cd525-f393-4f32-915c-525c8445aed3" containerName="registry-server" Feb 14 11:36:13 crc kubenswrapper[4736]: 
I0214 11:36:13.119943 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.180504 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-catalog-content\") pod \"certified-operators-9pnsc\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.180569 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-utilities\") pod \"certified-operators-9pnsc\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.180627 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fdxn\" (UniqueName: \"kubernetes.io/projected/7c9f937f-d347-4a5e-9b4d-c438381c0457-kube-api-access-6fdxn\") pod \"certified-operators-9pnsc\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.192640 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pnsc"] Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.283993 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-catalog-content\") pod \"certified-operators-9pnsc\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc 
kubenswrapper[4736]: I0214 11:36:13.284068 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-utilities\") pod \"certified-operators-9pnsc\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.284137 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fdxn\" (UniqueName: \"kubernetes.io/projected/7c9f937f-d347-4a5e-9b4d-c438381c0457-kube-api-access-6fdxn\") pod \"certified-operators-9pnsc\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.285013 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-catalog-content\") pod \"certified-operators-9pnsc\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.285098 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-utilities\") pod \"certified-operators-9pnsc\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 11:36:13.313187 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fdxn\" (UniqueName: \"kubernetes.io/projected/7c9f937f-d347-4a5e-9b4d-c438381c0457-kube-api-access-6fdxn\") pod \"certified-operators-9pnsc\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:13 crc kubenswrapper[4736]: I0214 
11:36:13.438183 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:14 crc kubenswrapper[4736]: I0214 11:36:14.668849 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pnsc"] Feb 14 11:36:15 crc kubenswrapper[4736]: I0214 11:36:15.423708 4736 generic.go:334] "Generic (PLEG): container finished" podID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerID="a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78" exitCode=0 Feb 14 11:36:15 crc kubenswrapper[4736]: I0214 11:36:15.424049 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pnsc" event={"ID":"7c9f937f-d347-4a5e-9b4d-c438381c0457","Type":"ContainerDied","Data":"a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78"} Feb 14 11:36:15 crc kubenswrapper[4736]: I0214 11:36:15.424089 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pnsc" event={"ID":"7c9f937f-d347-4a5e-9b4d-c438381c0457","Type":"ContainerStarted","Data":"d90abd2f3ba999d89d2a5f800d3bfab93570833db4d417b84f81e3de033b51de"} Feb 14 11:36:15 crc kubenswrapper[4736]: I0214 11:36:15.427969 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 11:36:16 crc kubenswrapper[4736]: I0214 11:36:16.398148 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:36:16 crc kubenswrapper[4736]: E0214 11:36:16.399871 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:36:16 crc kubenswrapper[4736]: I0214 11:36:16.438272 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pnsc" event={"ID":"7c9f937f-d347-4a5e-9b4d-c438381c0457","Type":"ContainerStarted","Data":"e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70"} Feb 14 11:36:18 crc kubenswrapper[4736]: I0214 11:36:18.458825 4736 generic.go:334] "Generic (PLEG): container finished" podID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerID="e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70" exitCode=0 Feb 14 11:36:18 crc kubenswrapper[4736]: I0214 11:36:18.459022 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pnsc" event={"ID":"7c9f937f-d347-4a5e-9b4d-c438381c0457","Type":"ContainerDied","Data":"e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70"} Feb 14 11:36:19 crc kubenswrapper[4736]: I0214 11:36:19.469274 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pnsc" event={"ID":"7c9f937f-d347-4a5e-9b4d-c438381c0457","Type":"ContainerStarted","Data":"cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b"} Feb 14 11:36:19 crc kubenswrapper[4736]: I0214 11:36:19.495286 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9pnsc" podStartSLOduration=2.838637025 podStartE2EDuration="6.495265806s" podCreationTimestamp="2026-02-14 11:36:13 +0000 UTC" firstStartedPulling="2026-02-14 11:36:15.42756721 +0000 UTC m=+3285.796194588" lastFinishedPulling="2026-02-14 11:36:19.084196001 +0000 UTC m=+3289.452823369" observedRunningTime="2026-02-14 11:36:19.491848199 +0000 UTC m=+3289.860475587" watchObservedRunningTime="2026-02-14 11:36:19.495265806 +0000 UTC m=+3289.863893164" Feb 14 11:36:23 crc 
kubenswrapper[4736]: I0214 11:36:23.438604 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:23 crc kubenswrapper[4736]: I0214 11:36:23.440451 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:23 crc kubenswrapper[4736]: I0214 11:36:23.495151 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:24 crc kubenswrapper[4736]: I0214 11:36:24.567915 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:24 crc kubenswrapper[4736]: I0214 11:36:24.625834 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pnsc"] Feb 14 11:36:26 crc kubenswrapper[4736]: I0214 11:36:26.534764 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9pnsc" podUID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerName="registry-server" containerID="cri-o://cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b" gracePeriod=2 Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.215842 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.358546 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-catalog-content\") pod \"7c9f937f-d347-4a5e-9b4d-c438381c0457\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.358762 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-utilities\") pod \"7c9f937f-d347-4a5e-9b4d-c438381c0457\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.358834 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fdxn\" (UniqueName: \"kubernetes.io/projected/7c9f937f-d347-4a5e-9b4d-c438381c0457-kube-api-access-6fdxn\") pod \"7c9f937f-d347-4a5e-9b4d-c438381c0457\" (UID: \"7c9f937f-d347-4a5e-9b4d-c438381c0457\") " Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.361761 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-utilities" (OuterVolumeSpecName: "utilities") pod "7c9f937f-d347-4a5e-9b4d-c438381c0457" (UID: "7c9f937f-d347-4a5e-9b4d-c438381c0457"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.365962 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9f937f-d347-4a5e-9b4d-c438381c0457-kube-api-access-6fdxn" (OuterVolumeSpecName: "kube-api-access-6fdxn") pod "7c9f937f-d347-4a5e-9b4d-c438381c0457" (UID: "7c9f937f-d347-4a5e-9b4d-c438381c0457"). InnerVolumeSpecName "kube-api-access-6fdxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.413959 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c9f937f-d347-4a5e-9b4d-c438381c0457" (UID: "7c9f937f-d347-4a5e-9b4d-c438381c0457"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.486826 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fdxn\" (UniqueName: \"kubernetes.io/projected/7c9f937f-d347-4a5e-9b4d-c438381c0457-kube-api-access-6fdxn\") on node \"crc\" DevicePath \"\"" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.486884 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.486912 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c9f937f-d347-4a5e-9b4d-c438381c0457-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.545472 4736 generic.go:334] "Generic (PLEG): container finished" podID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerID="cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b" exitCode=0 Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.545521 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pnsc" event={"ID":"7c9f937f-d347-4a5e-9b4d-c438381c0457","Type":"ContainerDied","Data":"cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b"} Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.545530 4736 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pnsc" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.545547 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pnsc" event={"ID":"7c9f937f-d347-4a5e-9b4d-c438381c0457","Type":"ContainerDied","Data":"d90abd2f3ba999d89d2a5f800d3bfab93570833db4d417b84f81e3de033b51de"} Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.545566 4736 scope.go:117] "RemoveContainer" containerID="cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.563465 4736 scope.go:117] "RemoveContainer" containerID="e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.580973 4736 scope.go:117] "RemoveContainer" containerID="a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.598794 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pnsc"] Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.606540 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9pnsc"] Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.632036 4736 scope.go:117] "RemoveContainer" containerID="cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b" Feb 14 11:36:27 crc kubenswrapper[4736]: E0214 11:36:27.632451 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b\": container with ID starting with cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b not found: ID does not exist" containerID="cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.632514 
4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b"} err="failed to get container status \"cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b\": rpc error: code = NotFound desc = could not find container \"cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b\": container with ID starting with cb6dbd14ce3f129da23144cef3753384e6c0c1eafd526515728905e20b1fe21b not found: ID does not exist" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.632549 4736 scope.go:117] "RemoveContainer" containerID="e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70" Feb 14 11:36:27 crc kubenswrapper[4736]: E0214 11:36:27.632972 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70\": container with ID starting with e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70 not found: ID does not exist" containerID="e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.633002 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70"} err="failed to get container status \"e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70\": rpc error: code = NotFound desc = could not find container \"e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70\": container with ID starting with e622281a00979dffa46f8ba802ce04cffb45d3a992236da557aa367b62518d70 not found: ID does not exist" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.633024 4736 scope.go:117] "RemoveContainer" containerID="a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78" Feb 14 11:36:27 crc kubenswrapper[4736]: E0214 
11:36:27.633368 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78\": container with ID starting with a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78 not found: ID does not exist" containerID="a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78" Feb 14 11:36:27 crc kubenswrapper[4736]: I0214 11:36:27.633390 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78"} err="failed to get container status \"a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78\": rpc error: code = NotFound desc = could not find container \"a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78\": container with ID starting with a0a9fe644854b168251f9ffd226dd2a5b6bdcb180ba42cd5c3b0706e63021d78 not found: ID does not exist" Feb 14 11:36:28 crc kubenswrapper[4736]: I0214 11:36:28.398318 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:36:28 crc kubenswrapper[4736]: E0214 11:36:28.398888 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:36:28 crc kubenswrapper[4736]: I0214 11:36:28.412094 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c9f937f-d347-4a5e-9b4d-c438381c0457" path="/var/lib/kubelet/pods/7c9f937f-d347-4a5e-9b4d-c438381c0457/volumes" Feb 14 11:36:41 crc kubenswrapper[4736]: I0214 11:36:41.398055 
4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:36:41 crc kubenswrapper[4736]: E0214 11:36:41.398751 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:36:54 crc kubenswrapper[4736]: I0214 11:36:54.397169 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:36:54 crc kubenswrapper[4736]: E0214 11:36:54.398109 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.613524 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vkkg2"] Feb 14 11:37:02 crc kubenswrapper[4736]: E0214 11:37:02.614461 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerName="extract-utilities" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.614474 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerName="extract-utilities" Feb 14 11:37:02 crc kubenswrapper[4736]: E0214 11:37:02.614484 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerName="extract-content" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.614490 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerName="extract-content" Feb 14 11:37:02 crc kubenswrapper[4736]: E0214 11:37:02.614505 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerName="registry-server" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.614511 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerName="registry-server" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.614726 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9f937f-d347-4a5e-9b4d-c438381c0457" containerName="registry-server" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.615968 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.636891 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkkg2"] Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.782907 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m758\" (UniqueName: \"kubernetes.io/projected/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-kube-api-access-8m758\") pod \"redhat-operators-vkkg2\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.782971 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-utilities\") pod \"redhat-operators-vkkg2\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " 
pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.783682 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-catalog-content\") pod \"redhat-operators-vkkg2\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.885727 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m758\" (UniqueName: \"kubernetes.io/projected/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-kube-api-access-8m758\") pod \"redhat-operators-vkkg2\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.885793 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-utilities\") pod \"redhat-operators-vkkg2\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.885822 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-catalog-content\") pod \"redhat-operators-vkkg2\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.886285 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-catalog-content\") pod \"redhat-operators-vkkg2\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " 
pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.886562 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-utilities\") pod \"redhat-operators-vkkg2\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.905568 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m758\" (UniqueName: \"kubernetes.io/projected/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-kube-api-access-8m758\") pod \"redhat-operators-vkkg2\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:02 crc kubenswrapper[4736]: I0214 11:37:02.932316 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:03 crc kubenswrapper[4736]: I0214 11:37:03.414618 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkkg2"] Feb 14 11:37:03 crc kubenswrapper[4736]: I0214 11:37:03.855872 4736 generic.go:334] "Generic (PLEG): container finished" podID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerID="a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2" exitCode=0 Feb 14 11:37:03 crc kubenswrapper[4736]: I0214 11:37:03.855922 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkkg2" event={"ID":"a6a83439-d2dd-4bbf-b516-8aa86b3e2385","Type":"ContainerDied","Data":"a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2"} Feb 14 11:37:03 crc kubenswrapper[4736]: I0214 11:37:03.856238 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkkg2" 
event={"ID":"a6a83439-d2dd-4bbf-b516-8aa86b3e2385","Type":"ContainerStarted","Data":"74679c3882dea3614fa681a816f392c0ef83ae2a6176adb9c512949d6c9243b1"} Feb 14 11:37:04 crc kubenswrapper[4736]: I0214 11:37:04.866035 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkkg2" event={"ID":"a6a83439-d2dd-4bbf-b516-8aa86b3e2385","Type":"ContainerStarted","Data":"26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548"} Feb 14 11:37:07 crc kubenswrapper[4736]: I0214 11:37:07.396759 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:37:07 crc kubenswrapper[4736]: E0214 11:37:07.397374 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:37:11 crc kubenswrapper[4736]: I0214 11:37:11.926491 4736 generic.go:334] "Generic (PLEG): container finished" podID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerID="26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548" exitCode=0 Feb 14 11:37:11 crc kubenswrapper[4736]: I0214 11:37:11.926568 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkkg2" event={"ID":"a6a83439-d2dd-4bbf-b516-8aa86b3e2385","Type":"ContainerDied","Data":"26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548"} Feb 14 11:37:12 crc kubenswrapper[4736]: I0214 11:37:12.937855 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkkg2" 
event={"ID":"a6a83439-d2dd-4bbf-b516-8aa86b3e2385","Type":"ContainerStarted","Data":"f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435"} Feb 14 11:37:12 crc kubenswrapper[4736]: I0214 11:37:12.963793 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vkkg2" podStartSLOduration=2.233392288 podStartE2EDuration="10.963769897s" podCreationTimestamp="2026-02-14 11:37:02 +0000 UTC" firstStartedPulling="2026-02-14 11:37:03.857968235 +0000 UTC m=+3334.226595603" lastFinishedPulling="2026-02-14 11:37:12.588345824 +0000 UTC m=+3342.956973212" observedRunningTime="2026-02-14 11:37:12.960205616 +0000 UTC m=+3343.328832994" watchObservedRunningTime="2026-02-14 11:37:12.963769897 +0000 UTC m=+3343.332397275" Feb 14 11:37:21 crc kubenswrapper[4736]: I0214 11:37:21.397311 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:37:21 crc kubenswrapper[4736]: E0214 11:37:21.398168 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:37:22 crc kubenswrapper[4736]: I0214 11:37:22.932546 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:22 crc kubenswrapper[4736]: I0214 11:37:22.932896 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:23 crc kubenswrapper[4736]: I0214 11:37:23.991155 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vkkg2" 
podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerName="registry-server" probeResult="failure" output=< Feb 14 11:37:23 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:37:23 crc kubenswrapper[4736]: > Feb 14 11:37:33 crc kubenswrapper[4736]: I0214 11:37:33.048286 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:33 crc kubenswrapper[4736]: I0214 11:37:33.110171 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:33 crc kubenswrapper[4736]: I0214 11:37:33.828127 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkkg2"] Feb 14 11:37:34 crc kubenswrapper[4736]: I0214 11:37:34.098002 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vkkg2" podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerName="registry-server" containerID="cri-o://f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435" gracePeriod=2 Feb 14 11:37:34 crc kubenswrapper[4736]: I0214 11:37:34.861886 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.046538 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-utilities\") pod \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.046973 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m758\" (UniqueName: \"kubernetes.io/projected/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-kube-api-access-8m758\") pod \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.047194 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-catalog-content\") pod \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\" (UID: \"a6a83439-d2dd-4bbf-b516-8aa86b3e2385\") " Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.047633 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-utilities" (OuterVolumeSpecName: "utilities") pod "a6a83439-d2dd-4bbf-b516-8aa86b3e2385" (UID: "a6a83439-d2dd-4bbf-b516-8aa86b3e2385"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.047901 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.054953 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-kube-api-access-8m758" (OuterVolumeSpecName: "kube-api-access-8m758") pod "a6a83439-d2dd-4bbf-b516-8aa86b3e2385" (UID: "a6a83439-d2dd-4bbf-b516-8aa86b3e2385"). InnerVolumeSpecName "kube-api-access-8m758". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.124953 4736 generic.go:334] "Generic (PLEG): container finished" podID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerID="f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435" exitCode=0 Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.125009 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkkg2" event={"ID":"a6a83439-d2dd-4bbf-b516-8aa86b3e2385","Type":"ContainerDied","Data":"f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435"} Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.125038 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkkg2" event={"ID":"a6a83439-d2dd-4bbf-b516-8aa86b3e2385","Type":"ContainerDied","Data":"74679c3882dea3614fa681a816f392c0ef83ae2a6176adb9c512949d6c9243b1"} Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.125047 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vkkg2" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.125057 4736 scope.go:117] "RemoveContainer" containerID="f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.151563 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m758\" (UniqueName: \"kubernetes.io/projected/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-kube-api-access-8m758\") on node \"crc\" DevicePath \"\"" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.154512 4736 scope.go:117] "RemoveContainer" containerID="26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.197282 4736 scope.go:117] "RemoveContainer" containerID="a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.215182 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6a83439-d2dd-4bbf-b516-8aa86b3e2385" (UID: "a6a83439-d2dd-4bbf-b516-8aa86b3e2385"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.252670 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a83439-d2dd-4bbf-b516-8aa86b3e2385-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.261876 4736 scope.go:117] "RemoveContainer" containerID="f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435" Feb 14 11:37:35 crc kubenswrapper[4736]: E0214 11:37:35.262288 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435\": container with ID starting with f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435 not found: ID does not exist" containerID="f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.262326 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435"} err="failed to get container status \"f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435\": rpc error: code = NotFound desc = could not find container \"f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435\": container with ID starting with f116459367a688a7c23043fbf1a033ef72ca50fb418a46332d983cee8224d435 not found: ID does not exist" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.262350 4736 scope.go:117] "RemoveContainer" containerID="26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548" Feb 14 11:37:35 crc kubenswrapper[4736]: E0214 11:37:35.262554 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548\": container with ID starting with 26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548 not found: ID does not exist" containerID="26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.262579 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548"} err="failed to get container status \"26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548\": rpc error: code = NotFound desc = could not find container \"26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548\": container with ID starting with 26f76ba74557bdd0875ce27f800ca5d2027bbfcbac76f01bd2b5e549dc6e2548 not found: ID does not exist" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.262594 4736 scope.go:117] "RemoveContainer" containerID="a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2" Feb 14 11:37:35 crc kubenswrapper[4736]: E0214 11:37:35.262854 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2\": container with ID starting with a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2 not found: ID does not exist" containerID="a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.262885 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2"} err="failed to get container status \"a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2\": rpc error: code = NotFound desc = could not find container \"a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2\": container with ID 
starting with a965dad3ee330aaa835e30c2803f3dffae10732793ad8765f11c75594a117dd2 not found: ID does not exist" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.397837 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:37:35 crc kubenswrapper[4736]: E0214 11:37:35.398105 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.480973 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkkg2"] Feb 14 11:37:35 crc kubenswrapper[4736]: I0214 11:37:35.490255 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vkkg2"] Feb 14 11:37:36 crc kubenswrapper[4736]: I0214 11:37:36.409989 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" path="/var/lib/kubelet/pods/a6a83439-d2dd-4bbf-b516-8aa86b3e2385/volumes" Feb 14 11:37:49 crc kubenswrapper[4736]: I0214 11:37:49.397312 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:37:49 crc kubenswrapper[4736]: E0214 11:37:49.398139 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" 
podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:38:01 crc kubenswrapper[4736]: I0214 11:38:01.397800 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:38:01 crc kubenswrapper[4736]: E0214 11:38:01.398715 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:38:13 crc kubenswrapper[4736]: I0214 11:38:13.397996 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:38:13 crc kubenswrapper[4736]: E0214 11:38:13.398894 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:38:24 crc kubenswrapper[4736]: I0214 11:38:24.397646 4736 scope.go:117] "RemoveContainer" containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:38:25 crc kubenswrapper[4736]: I0214 11:38:25.594368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"8615ffb1b0e7f52f2bcf05a24a181afc7968eacc7eeb715ece525ec1bee5b06e"} Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.268191 4736 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m469p"] Feb 14 11:39:49 crc kubenswrapper[4736]: E0214 11:39:49.268973 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerName="extract-utilities" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.268987 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerName="extract-utilities" Feb 14 11:39:49 crc kubenswrapper[4736]: E0214 11:39:49.268998 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerName="registry-server" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.269004 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerName="registry-server" Feb 14 11:39:49 crc kubenswrapper[4736]: E0214 11:39:49.269021 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerName="extract-content" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.269028 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerName="extract-content" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.269213 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a83439-d2dd-4bbf-b516-8aa86b3e2385" containerName="registry-server" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.270611 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.286675 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m469p"] Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.368767 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-catalog-content\") pod \"redhat-marketplace-m469p\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.368957 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-utilities\") pod \"redhat-marketplace-m469p\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.369003 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2qf\" (UniqueName: \"kubernetes.io/projected/dbc72dc9-5881-46b1-a192-3c33429a9326-kube-api-access-2w2qf\") pod \"redhat-marketplace-m469p\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.471041 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-utilities\") pod \"redhat-marketplace-m469p\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.471170 4736 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2w2qf\" (UniqueName: \"kubernetes.io/projected/dbc72dc9-5881-46b1-a192-3c33429a9326-kube-api-access-2w2qf\") pod \"redhat-marketplace-m469p\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.471380 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-catalog-content\") pod \"redhat-marketplace-m469p\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.471644 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-utilities\") pod \"redhat-marketplace-m469p\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.472666 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-catalog-content\") pod \"redhat-marketplace-m469p\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.493473 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w2qf\" (UniqueName: \"kubernetes.io/projected/dbc72dc9-5881-46b1-a192-3c33429a9326-kube-api-access-2w2qf\") pod \"redhat-marketplace-m469p\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:49 crc kubenswrapper[4736]: I0214 11:39:49.590191 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:50 crc kubenswrapper[4736]: I0214 11:39:50.090973 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m469p"] Feb 14 11:39:50 crc kubenswrapper[4736]: I0214 11:39:50.598037 4736 generic.go:334] "Generic (PLEG): container finished" podID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerID="660e51fad8570bde3684fd914495a4a5eaf756cc62e3f2b2faeea6115be400ba" exitCode=0 Feb 14 11:39:50 crc kubenswrapper[4736]: I0214 11:39:50.598085 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m469p" event={"ID":"dbc72dc9-5881-46b1-a192-3c33429a9326","Type":"ContainerDied","Data":"660e51fad8570bde3684fd914495a4a5eaf756cc62e3f2b2faeea6115be400ba"} Feb 14 11:39:50 crc kubenswrapper[4736]: I0214 11:39:50.598114 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m469p" event={"ID":"dbc72dc9-5881-46b1-a192-3c33429a9326","Type":"ContainerStarted","Data":"1eaea3fbf480e640bdae186b21fe9c1b5f79929a4c6c5f77444de91136d87713"} Feb 14 11:39:51 crc kubenswrapper[4736]: I0214 11:39:51.608980 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m469p" event={"ID":"dbc72dc9-5881-46b1-a192-3c33429a9326","Type":"ContainerStarted","Data":"947e54cd3ab93e8de00ba3dd3fec2c9903f9d065d875e202a09d7092583341af"} Feb 14 11:39:52 crc kubenswrapper[4736]: I0214 11:39:52.621404 4736 generic.go:334] "Generic (PLEG): container finished" podID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerID="947e54cd3ab93e8de00ba3dd3fec2c9903f9d065d875e202a09d7092583341af" exitCode=0 Feb 14 11:39:52 crc kubenswrapper[4736]: I0214 11:39:52.621454 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m469p" 
event={"ID":"dbc72dc9-5881-46b1-a192-3c33429a9326","Type":"ContainerDied","Data":"947e54cd3ab93e8de00ba3dd3fec2c9903f9d065d875e202a09d7092583341af"} Feb 14 11:39:53 crc kubenswrapper[4736]: I0214 11:39:53.632105 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m469p" event={"ID":"dbc72dc9-5881-46b1-a192-3c33429a9326","Type":"ContainerStarted","Data":"c6daeea42e218d2a0230e92ab3353758e6fe57eccf3ab64afc7fc28a788567a8"} Feb 14 11:39:53 crc kubenswrapper[4736]: I0214 11:39:53.654382 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m469p" podStartSLOduration=2.235734158 podStartE2EDuration="4.654363654s" podCreationTimestamp="2026-02-14 11:39:49 +0000 UTC" firstStartedPulling="2026-02-14 11:39:50.599573728 +0000 UTC m=+3500.968201106" lastFinishedPulling="2026-02-14 11:39:53.018203234 +0000 UTC m=+3503.386830602" observedRunningTime="2026-02-14 11:39:53.650083633 +0000 UTC m=+3504.018711011" watchObservedRunningTime="2026-02-14 11:39:53.654363654 +0000 UTC m=+3504.022991022" Feb 14 11:39:59 crc kubenswrapper[4736]: I0214 11:39:59.590893 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:59 crc kubenswrapper[4736]: I0214 11:39:59.592440 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:59 crc kubenswrapper[4736]: I0214 11:39:59.656661 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:59 crc kubenswrapper[4736]: I0214 11:39:59.752313 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:39:59 crc kubenswrapper[4736]: I0214 11:39:59.898180 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-m469p"] Feb 14 11:40:01 crc kubenswrapper[4736]: I0214 11:40:01.710466 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m469p" podUID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerName="registry-server" containerID="cri-o://c6daeea42e218d2a0230e92ab3353758e6fe57eccf3ab64afc7fc28a788567a8" gracePeriod=2 Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.722468 4736 generic.go:334] "Generic (PLEG): container finished" podID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerID="c6daeea42e218d2a0230e92ab3353758e6fe57eccf3ab64afc7fc28a788567a8" exitCode=0 Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.722511 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m469p" event={"ID":"dbc72dc9-5881-46b1-a192-3c33429a9326","Type":"ContainerDied","Data":"c6daeea42e218d2a0230e92ab3353758e6fe57eccf3ab64afc7fc28a788567a8"} Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.723114 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m469p" event={"ID":"dbc72dc9-5881-46b1-a192-3c33429a9326","Type":"ContainerDied","Data":"1eaea3fbf480e640bdae186b21fe9c1b5f79929a4c6c5f77444de91136d87713"} Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.723127 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eaea3fbf480e640bdae186b21fe9c1b5f79929a4c6c5f77444de91136d87713" Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.786625 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.874008 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-utilities\") pod \"dbc72dc9-5881-46b1-a192-3c33429a9326\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.874335 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-catalog-content\") pod \"dbc72dc9-5881-46b1-a192-3c33429a9326\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.874564 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w2qf\" (UniqueName: \"kubernetes.io/projected/dbc72dc9-5881-46b1-a192-3c33429a9326-kube-api-access-2w2qf\") pod \"dbc72dc9-5881-46b1-a192-3c33429a9326\" (UID: \"dbc72dc9-5881-46b1-a192-3c33429a9326\") " Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.874976 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-utilities" (OuterVolumeSpecName: "utilities") pod "dbc72dc9-5881-46b1-a192-3c33429a9326" (UID: "dbc72dc9-5881-46b1-a192-3c33429a9326"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.884061 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc72dc9-5881-46b1-a192-3c33429a9326-kube-api-access-2w2qf" (OuterVolumeSpecName: "kube-api-access-2w2qf") pod "dbc72dc9-5881-46b1-a192-3c33429a9326" (UID: "dbc72dc9-5881-46b1-a192-3c33429a9326"). InnerVolumeSpecName "kube-api-access-2w2qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.899066 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbc72dc9-5881-46b1-a192-3c33429a9326" (UID: "dbc72dc9-5881-46b1-a192-3c33429a9326"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.975567 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w2qf\" (UniqueName: \"kubernetes.io/projected/dbc72dc9-5881-46b1-a192-3c33429a9326-kube-api-access-2w2qf\") on node \"crc\" DevicePath \"\"" Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.975600 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:40:02 crc kubenswrapper[4736]: I0214 11:40:02.975610 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc72dc9-5881-46b1-a192-3c33429a9326-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:40:03 crc kubenswrapper[4736]: I0214 11:40:03.730277 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m469p" Feb 14 11:40:03 crc kubenswrapper[4736]: I0214 11:40:03.787776 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m469p"] Feb 14 11:40:03 crc kubenswrapper[4736]: I0214 11:40:03.801761 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m469p"] Feb 14 11:40:04 crc kubenswrapper[4736]: I0214 11:40:04.411067 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc72dc9-5881-46b1-a192-3c33429a9326" path="/var/lib/kubelet/pods/dbc72dc9-5881-46b1-a192-3c33429a9326/volumes" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.503522 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bchpl"] Feb 14 11:40:21 crc kubenswrapper[4736]: E0214 11:40:21.504545 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerName="extract-utilities" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.504565 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerName="extract-utilities" Feb 14 11:40:21 crc kubenswrapper[4736]: E0214 11:40:21.504601 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerName="registry-server" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.504611 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerName="registry-server" Feb 14 11:40:21 crc kubenswrapper[4736]: E0214 11:40:21.504626 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerName="extract-content" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.504635 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerName="extract-content" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.504881 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc72dc9-5881-46b1-a192-3c33429a9326" containerName="registry-server" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.506523 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.516877 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bchpl"] Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.655815 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-utilities\") pod \"community-operators-bchpl\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.657116 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85d5j\" (UniqueName: \"kubernetes.io/projected/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-kube-api-access-85d5j\") pod \"community-operators-bchpl\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.657313 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-catalog-content\") pod \"community-operators-bchpl\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.758578 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-utilities\") pod \"community-operators-bchpl\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.758625 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85d5j\" (UniqueName: \"kubernetes.io/projected/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-kube-api-access-85d5j\") pod \"community-operators-bchpl\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.758707 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-catalog-content\") pod \"community-operators-bchpl\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.759203 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-catalog-content\") pod \"community-operators-bchpl\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.759372 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-utilities\") pod \"community-operators-bchpl\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.777824 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-85d5j\" (UniqueName: \"kubernetes.io/projected/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-kube-api-access-85d5j\") pod \"community-operators-bchpl\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:21 crc kubenswrapper[4736]: I0214 11:40:21.865848 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:22 crc kubenswrapper[4736]: I0214 11:40:22.443828 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bchpl"] Feb 14 11:40:22 crc kubenswrapper[4736]: I0214 11:40:22.934999 4736 generic.go:334] "Generic (PLEG): container finished" podID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerID="22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb" exitCode=0 Feb 14 11:40:22 crc kubenswrapper[4736]: I0214 11:40:22.935252 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bchpl" event={"ID":"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb","Type":"ContainerDied","Data":"22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb"} Feb 14 11:40:22 crc kubenswrapper[4736]: I0214 11:40:22.935299 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bchpl" event={"ID":"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb","Type":"ContainerStarted","Data":"7c6c21e14fc3a214bd95cf4813efac64151be422d86b6ff6bad3eaa952d03403"} Feb 14 11:40:23 crc kubenswrapper[4736]: I0214 11:40:23.946612 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bchpl" event={"ID":"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb","Type":"ContainerStarted","Data":"a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743"} Feb 14 11:40:25 crc kubenswrapper[4736]: I0214 11:40:25.962213 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerID="a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743" exitCode=0 Feb 14 11:40:25 crc kubenswrapper[4736]: I0214 11:40:25.962386 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bchpl" event={"ID":"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb","Type":"ContainerDied","Data":"a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743"} Feb 14 11:40:26 crc kubenswrapper[4736]: I0214 11:40:26.972384 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bchpl" event={"ID":"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb","Type":"ContainerStarted","Data":"ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8"} Feb 14 11:40:26 crc kubenswrapper[4736]: I0214 11:40:26.996693 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bchpl" podStartSLOduration=2.498191899 podStartE2EDuration="5.99667227s" podCreationTimestamp="2026-02-14 11:40:21 +0000 UTC" firstStartedPulling="2026-02-14 11:40:22.946139606 +0000 UTC m=+3533.314766994" lastFinishedPulling="2026-02-14 11:40:26.444620007 +0000 UTC m=+3536.813247365" observedRunningTime="2026-02-14 11:40:26.993796398 +0000 UTC m=+3537.362423786" watchObservedRunningTime="2026-02-14 11:40:26.99667227 +0000 UTC m=+3537.365299648" Feb 14 11:40:31 crc kubenswrapper[4736]: I0214 11:40:31.866490 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:31 crc kubenswrapper[4736]: I0214 11:40:31.867087 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:31 crc kubenswrapper[4736]: I0214 11:40:31.940501 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:32 
crc kubenswrapper[4736]: I0214 11:40:32.059362 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:32 crc kubenswrapper[4736]: I0214 11:40:32.179369 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bchpl"] Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.025651 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bchpl" podUID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerName="registry-server" containerID="cri-o://ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8" gracePeriod=2 Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.790157 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.844515 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85d5j\" (UniqueName: \"kubernetes.io/projected/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-kube-api-access-85d5j\") pod \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.844562 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-utilities\") pod \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.844779 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-catalog-content\") pod \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\" (UID: \"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb\") " Feb 14 
11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.845973 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-utilities" (OuterVolumeSpecName: "utilities") pod "3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" (UID: "3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.858145 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-kube-api-access-85d5j" (OuterVolumeSpecName: "kube-api-access-85d5j") pod "3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" (UID: "3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb"). InnerVolumeSpecName "kube-api-access-85d5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.907048 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" (UID: "3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.947387 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.947423 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:40:34 crc kubenswrapper[4736]: I0214 11:40:34.947435 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85d5j\" (UniqueName: \"kubernetes.io/projected/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb-kube-api-access-85d5j\") on node \"crc\" DevicePath \"\"" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.036344 4736 generic.go:334] "Generic (PLEG): container finished" podID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerID="ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8" exitCode=0 Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.036396 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bchpl" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.036417 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bchpl" event={"ID":"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb","Type":"ContainerDied","Data":"ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8"} Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.036758 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bchpl" event={"ID":"3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb","Type":"ContainerDied","Data":"7c6c21e14fc3a214bd95cf4813efac64151be422d86b6ff6bad3eaa952d03403"} Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.036777 4736 scope.go:117] "RemoveContainer" containerID="ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.064823 4736 scope.go:117] "RemoveContainer" containerID="a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.065369 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bchpl"] Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.083660 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bchpl"] Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.088022 4736 scope.go:117] "RemoveContainer" containerID="22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.143213 4736 scope.go:117] "RemoveContainer" containerID="ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8" Feb 14 11:40:35 crc kubenswrapper[4736]: E0214 11:40:35.143597 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8\": container with ID starting with ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8 not found: ID does not exist" containerID="ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.143624 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8"} err="failed to get container status \"ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8\": rpc error: code = NotFound desc = could not find container \"ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8\": container with ID starting with ff9bddfb015b8a8c96df323391cc97482ba26301253d8cfbea83534ff16232d8 not found: ID does not exist" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.143643 4736 scope.go:117] "RemoveContainer" containerID="a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743" Feb 14 11:40:35 crc kubenswrapper[4736]: E0214 11:40:35.143870 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743\": container with ID starting with a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743 not found: ID does not exist" containerID="a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.143889 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743"} err="failed to get container status \"a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743\": rpc error: code = NotFound desc = could not find container \"a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743\": container with ID 
starting with a39b399ab5d463a8d928ed60948c68c30ceb758c1b223b2b28da03394969e743 not found: ID does not exist" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.143903 4736 scope.go:117] "RemoveContainer" containerID="22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb" Feb 14 11:40:35 crc kubenswrapper[4736]: E0214 11:40:35.144077 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb\": container with ID starting with 22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb not found: ID does not exist" containerID="22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb" Feb 14 11:40:35 crc kubenswrapper[4736]: I0214 11:40:35.144090 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb"} err="failed to get container status \"22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb\": rpc error: code = NotFound desc = could not find container \"22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb\": container with ID starting with 22097ba05a82d9d9e520e9e22d853a995ba25252f430dce185709475c0342bcb not found: ID does not exist" Feb 14 11:40:36 crc kubenswrapper[4736]: I0214 11:40:36.408521 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" path="/var/lib/kubelet/pods/3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb/volumes" Feb 14 11:40:47 crc kubenswrapper[4736]: I0214 11:40:47.695448 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:40:47 crc kubenswrapper[4736]: I0214 
11:40:47.695911 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:41:17 crc kubenswrapper[4736]: I0214 11:41:17.695833 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:41:17 crc kubenswrapper[4736]: I0214 11:41:17.696555 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:41:47 crc kubenswrapper[4736]: I0214 11:41:47.700419 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:41:47 crc kubenswrapper[4736]: I0214 11:41:47.701000 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:41:47 crc kubenswrapper[4736]: I0214 11:41:47.701049 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:41:47 crc kubenswrapper[4736]: I0214 11:41:47.701762 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8615ffb1b0e7f52f2bcf05a24a181afc7968eacc7eeb715ece525ec1bee5b06e"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:41:47 crc kubenswrapper[4736]: I0214 11:41:47.701820 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://8615ffb1b0e7f52f2bcf05a24a181afc7968eacc7eeb715ece525ec1bee5b06e" gracePeriod=600 Feb 14 11:41:48 crc kubenswrapper[4736]: I0214 11:41:48.716565 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="8615ffb1b0e7f52f2bcf05a24a181afc7968eacc7eeb715ece525ec1bee5b06e" exitCode=0 Feb 14 11:41:48 crc kubenswrapper[4736]: I0214 11:41:48.716661 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"8615ffb1b0e7f52f2bcf05a24a181afc7968eacc7eeb715ece525ec1bee5b06e"} Feb 14 11:41:48 crc kubenswrapper[4736]: I0214 11:41:48.717358 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e"} Feb 14 11:41:48 crc kubenswrapper[4736]: I0214 11:41:48.717403 4736 scope.go:117] "RemoveContainer" 
containerID="0e5c94c1bbabfbc4e1357f1aff9d2fc7c68d77ce03ec720d5952a7fc949894c1" Feb 14 11:44:17 crc kubenswrapper[4736]: I0214 11:44:17.695808 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:44:17 crc kubenswrapper[4736]: I0214 11:44:17.696361 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:44:47 crc kubenswrapper[4736]: I0214 11:44:47.695315 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:44:47 crc kubenswrapper[4736]: I0214 11:44:47.695916 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.194797 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f"] Feb 14 11:45:00 crc kubenswrapper[4736]: E0214 11:45:00.195657 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerName="extract-utilities" Feb 14 11:45:00 crc 
kubenswrapper[4736]: I0214 11:45:00.195672 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerName="extract-utilities" Feb 14 11:45:00 crc kubenswrapper[4736]: E0214 11:45:00.195689 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerName="registry-server" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.195695 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerName="registry-server" Feb 14 11:45:00 crc kubenswrapper[4736]: E0214 11:45:00.195704 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerName="extract-content" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.195710 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerName="extract-content" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.195943 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd7bcc6-22d5-4e25-ac73-6f0cfece2efb" containerName="registry-server" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.196500 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.200711 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.200946 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.213296 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f"] Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.251916 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tc4g\" (UniqueName: \"kubernetes.io/projected/97e56406-2993-418c-91bf-0191ad7c115a-kube-api-access-8tc4g\") pod \"collect-profiles-29517825-6s87f\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.252005 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97e56406-2993-418c-91bf-0191ad7c115a-config-volume\") pod \"collect-profiles-29517825-6s87f\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.252081 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97e56406-2993-418c-91bf-0191ad7c115a-secret-volume\") pod \"collect-profiles-29517825-6s87f\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.353280 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tc4g\" (UniqueName: \"kubernetes.io/projected/97e56406-2993-418c-91bf-0191ad7c115a-kube-api-access-8tc4g\") pod \"collect-profiles-29517825-6s87f\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.353364 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97e56406-2993-418c-91bf-0191ad7c115a-config-volume\") pod \"collect-profiles-29517825-6s87f\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.353445 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97e56406-2993-418c-91bf-0191ad7c115a-secret-volume\") pod \"collect-profiles-29517825-6s87f\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.354405 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97e56406-2993-418c-91bf-0191ad7c115a-config-volume\") pod \"collect-profiles-29517825-6s87f\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.361373 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/97e56406-2993-418c-91bf-0191ad7c115a-secret-volume\") pod \"collect-profiles-29517825-6s87f\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.369068 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tc4g\" (UniqueName: \"kubernetes.io/projected/97e56406-2993-418c-91bf-0191ad7c115a-kube-api-access-8tc4g\") pod \"collect-profiles-29517825-6s87f\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.517401 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:00 crc kubenswrapper[4736]: I0214 11:45:00.965017 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f"] Feb 14 11:45:00 crc kubenswrapper[4736]: W0214 11:45:00.967783 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97e56406_2993_418c_91bf_0191ad7c115a.slice/crio-a40db178d6728a27bbea0a5da51afdd4bb957768818646fad1c7d950fef5131e WatchSource:0}: Error finding container a40db178d6728a27bbea0a5da51afdd4bb957768818646fad1c7d950fef5131e: Status 404 returned error can't find the container with id a40db178d6728a27bbea0a5da51afdd4bb957768818646fad1c7d950fef5131e Feb 14 11:45:01 crc kubenswrapper[4736]: I0214 11:45:01.623734 4736 generic.go:334] "Generic (PLEG): container finished" podID="97e56406-2993-418c-91bf-0191ad7c115a" containerID="93291a1d3586762ee05dd8ba758c642cf9c4cd92832b0ea104afd53576da4eea" exitCode=0 Feb 14 11:45:01 crc kubenswrapper[4736]: I0214 11:45:01.623872 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" event={"ID":"97e56406-2993-418c-91bf-0191ad7c115a","Type":"ContainerDied","Data":"93291a1d3586762ee05dd8ba758c642cf9c4cd92832b0ea104afd53576da4eea"} Feb 14 11:45:01 crc kubenswrapper[4736]: I0214 11:45:01.624156 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" event={"ID":"97e56406-2993-418c-91bf-0191ad7c115a","Type":"ContainerStarted","Data":"a40db178d6728a27bbea0a5da51afdd4bb957768818646fad1c7d950fef5131e"} Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.099781 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.209546 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tc4g\" (UniqueName: \"kubernetes.io/projected/97e56406-2993-418c-91bf-0191ad7c115a-kube-api-access-8tc4g\") pod \"97e56406-2993-418c-91bf-0191ad7c115a\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.209806 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97e56406-2993-418c-91bf-0191ad7c115a-secret-volume\") pod \"97e56406-2993-418c-91bf-0191ad7c115a\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.209911 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97e56406-2993-418c-91bf-0191ad7c115a-config-volume\") pod \"97e56406-2993-418c-91bf-0191ad7c115a\" (UID: \"97e56406-2993-418c-91bf-0191ad7c115a\") " Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.210618 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/97e56406-2993-418c-91bf-0191ad7c115a-config-volume" (OuterVolumeSpecName: "config-volume") pod "97e56406-2993-418c-91bf-0191ad7c115a" (UID: "97e56406-2993-418c-91bf-0191ad7c115a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.211009 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97e56406-2993-418c-91bf-0191ad7c115a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.215980 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e56406-2993-418c-91bf-0191ad7c115a-kube-api-access-8tc4g" (OuterVolumeSpecName: "kube-api-access-8tc4g") pod "97e56406-2993-418c-91bf-0191ad7c115a" (UID: "97e56406-2993-418c-91bf-0191ad7c115a"). InnerVolumeSpecName "kube-api-access-8tc4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.216426 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e56406-2993-418c-91bf-0191ad7c115a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "97e56406-2993-418c-91bf-0191ad7c115a" (UID: "97e56406-2993-418c-91bf-0191ad7c115a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.312220 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tc4g\" (UniqueName: \"kubernetes.io/projected/97e56406-2993-418c-91bf-0191ad7c115a-kube-api-access-8tc4g\") on node \"crc\" DevicePath \"\"" Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.312276 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97e56406-2993-418c-91bf-0191ad7c115a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.645816 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" event={"ID":"97e56406-2993-418c-91bf-0191ad7c115a","Type":"ContainerDied","Data":"a40db178d6728a27bbea0a5da51afdd4bb957768818646fad1c7d950fef5131e"} Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.645863 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a40db178d6728a27bbea0a5da51afdd4bb957768818646fad1c7d950fef5131e" Feb 14 11:45:03 crc kubenswrapper[4736]: I0214 11:45:03.645930 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517825-6s87f" Feb 14 11:45:04 crc kubenswrapper[4736]: I0214 11:45:04.184516 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7"] Feb 14 11:45:04 crc kubenswrapper[4736]: I0214 11:45:04.193000 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517780-b6ql7"] Feb 14 11:45:04 crc kubenswrapper[4736]: I0214 11:45:04.416997 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0205e213-3253-4b14-b645-18a0dfdfe4d3" path="/var/lib/kubelet/pods/0205e213-3253-4b14-b645-18a0dfdfe4d3/volumes" Feb 14 11:45:17 crc kubenswrapper[4736]: I0214 11:45:17.696034 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:45:17 crc kubenswrapper[4736]: I0214 11:45:17.696805 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:45:17 crc kubenswrapper[4736]: I0214 11:45:17.696906 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:45:17 crc kubenswrapper[4736]: I0214 11:45:17.698298 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e"} 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:45:17 crc kubenswrapper[4736]: I0214 11:45:17.698428 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" gracePeriod=600 Feb 14 11:45:17 crc kubenswrapper[4736]: E0214 11:45:17.853709 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:45:18 crc kubenswrapper[4736]: I0214 11:45:18.790818 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" exitCode=0 Feb 14 11:45:18 crc kubenswrapper[4736]: I0214 11:45:18.790866 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e"} Feb 14 11:45:18 crc kubenswrapper[4736]: I0214 11:45:18.790902 4736 scope.go:117] "RemoveContainer" containerID="8615ffb1b0e7f52f2bcf05a24a181afc7968eacc7eeb715ece525ec1bee5b06e" Feb 14 11:45:18 crc kubenswrapper[4736]: I0214 11:45:18.791541 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 
14 11:45:18 crc kubenswrapper[4736]: E0214 11:45:18.791850 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:45:33 crc kubenswrapper[4736]: I0214 11:45:33.397673 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:45:33 crc kubenswrapper[4736]: E0214 11:45:33.398459 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:45:37 crc kubenswrapper[4736]: I0214 11:45:37.018711 4736 scope.go:117] "RemoveContainer" containerID="b9364a798d80b5477cfb1e4cb8b4556328faffd007702fd9ef3ce2ddf8ee8d5b" Feb 14 11:45:44 crc kubenswrapper[4736]: I0214 11:45:44.397827 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:45:44 crc kubenswrapper[4736]: E0214 11:45:44.399101 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" 
podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:45:58 crc kubenswrapper[4736]: I0214 11:45:58.397330 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:45:58 crc kubenswrapper[4736]: E0214 11:45:58.398238 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:46:13 crc kubenswrapper[4736]: I0214 11:46:13.397056 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:46:13 crc kubenswrapper[4736]: E0214 11:46:13.397918 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:46:25 crc kubenswrapper[4736]: I0214 11:46:25.398395 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:46:25 crc kubenswrapper[4736]: E0214 11:46:25.399559 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:46:37 crc kubenswrapper[4736]: I0214 11:46:37.087597 4736 scope.go:117] "RemoveContainer" containerID="c6daeea42e218d2a0230e92ab3353758e6fe57eccf3ab64afc7fc28a788567a8" Feb 14 11:46:37 crc kubenswrapper[4736]: I0214 11:46:37.111785 4736 scope.go:117] "RemoveContainer" containerID="660e51fad8570bde3684fd914495a4a5eaf756cc62e3f2b2faeea6115be400ba" Feb 14 11:46:37 crc kubenswrapper[4736]: I0214 11:46:37.164999 4736 scope.go:117] "RemoveContainer" containerID="947e54cd3ab93e8de00ba3dd3fec2c9903f9d065d875e202a09d7092583341af" Feb 14 11:46:40 crc kubenswrapper[4736]: I0214 11:46:40.405181 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:46:40 crc kubenswrapper[4736]: E0214 11:46:40.405869 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:46:44 crc kubenswrapper[4736]: I0214 11:46:44.915341 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vh8b2"] Feb 14 11:46:44 crc kubenswrapper[4736]: E0214 11:46:44.916304 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e56406-2993-418c-91bf-0191ad7c115a" containerName="collect-profiles" Feb 14 11:46:44 crc kubenswrapper[4736]: I0214 11:46:44.916326 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e56406-2993-418c-91bf-0191ad7c115a" containerName="collect-profiles" Feb 14 11:46:44 crc kubenswrapper[4736]: I0214 11:46:44.916667 4736 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="97e56406-2993-418c-91bf-0191ad7c115a" containerName="collect-profiles" Feb 14 11:46:44 crc kubenswrapper[4736]: I0214 11:46:44.918541 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:44 crc kubenswrapper[4736]: I0214 11:46:44.926596 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vh8b2"] Feb 14 11:46:44 crc kubenswrapper[4736]: I0214 11:46:44.983402 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8fbn\" (UniqueName: \"kubernetes.io/projected/b5ee64c7-a97b-496b-b11e-c24c5a009d37-kube-api-access-k8fbn\") pod \"certified-operators-vh8b2\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:44 crc kubenswrapper[4736]: I0214 11:46:44.983437 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-utilities\") pod \"certified-operators-vh8b2\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:44 crc kubenswrapper[4736]: I0214 11:46:44.983464 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-catalog-content\") pod \"certified-operators-vh8b2\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:45 crc kubenswrapper[4736]: I0214 11:46:45.084881 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8fbn\" (UniqueName: \"kubernetes.io/projected/b5ee64c7-a97b-496b-b11e-c24c5a009d37-kube-api-access-k8fbn\") 
pod \"certified-operators-vh8b2\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:45 crc kubenswrapper[4736]: I0214 11:46:45.084949 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-utilities\") pod \"certified-operators-vh8b2\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:45 crc kubenswrapper[4736]: I0214 11:46:45.084988 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-catalog-content\") pod \"certified-operators-vh8b2\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:45 crc kubenswrapper[4736]: I0214 11:46:45.085459 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-utilities\") pod \"certified-operators-vh8b2\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:45 crc kubenswrapper[4736]: I0214 11:46:45.085627 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-catalog-content\") pod \"certified-operators-vh8b2\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:45 crc kubenswrapper[4736]: I0214 11:46:45.188612 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8fbn\" (UniqueName: \"kubernetes.io/projected/b5ee64c7-a97b-496b-b11e-c24c5a009d37-kube-api-access-k8fbn\") pod \"certified-operators-vh8b2\" (UID: 
\"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:45 crc kubenswrapper[4736]: I0214 11:46:45.239403 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:45 crc kubenswrapper[4736]: I0214 11:46:45.869515 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vh8b2"] Feb 14 11:46:46 crc kubenswrapper[4736]: I0214 11:46:46.613472 4736 generic.go:334] "Generic (PLEG): container finished" podID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerID="2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e" exitCode=0 Feb 14 11:46:46 crc kubenswrapper[4736]: I0214 11:46:46.613526 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh8b2" event={"ID":"b5ee64c7-a97b-496b-b11e-c24c5a009d37","Type":"ContainerDied","Data":"2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e"} Feb 14 11:46:46 crc kubenswrapper[4736]: I0214 11:46:46.614416 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh8b2" event={"ID":"b5ee64c7-a97b-496b-b11e-c24c5a009d37","Type":"ContainerStarted","Data":"487af6682c079bba1105092ba0ae0fe9a57949311395693b41250fc572101764"} Feb 14 11:46:46 crc kubenswrapper[4736]: I0214 11:46:46.615154 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 11:46:47 crc kubenswrapper[4736]: I0214 11:46:47.625274 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh8b2" event={"ID":"b5ee64c7-a97b-496b-b11e-c24c5a009d37","Type":"ContainerStarted","Data":"d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8"} Feb 14 11:46:49 crc kubenswrapper[4736]: I0214 11:46:49.646582 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerID="d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8" exitCode=0 Feb 14 11:46:49 crc kubenswrapper[4736]: I0214 11:46:49.646694 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh8b2" event={"ID":"b5ee64c7-a97b-496b-b11e-c24c5a009d37","Type":"ContainerDied","Data":"d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8"} Feb 14 11:46:50 crc kubenswrapper[4736]: I0214 11:46:50.657075 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh8b2" event={"ID":"b5ee64c7-a97b-496b-b11e-c24c5a009d37","Type":"ContainerStarted","Data":"e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee"} Feb 14 11:46:50 crc kubenswrapper[4736]: I0214 11:46:50.677279 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vh8b2" podStartSLOduration=3.216219444 podStartE2EDuration="6.677264026s" podCreationTimestamp="2026-02-14 11:46:44 +0000 UTC" firstStartedPulling="2026-02-14 11:46:46.61493517 +0000 UTC m=+3916.983562538" lastFinishedPulling="2026-02-14 11:46:50.075979742 +0000 UTC m=+3920.444607120" observedRunningTime="2026-02-14 11:46:50.673395226 +0000 UTC m=+3921.042022594" watchObservedRunningTime="2026-02-14 11:46:50.677264026 +0000 UTC m=+3921.045891384" Feb 14 11:46:52 crc kubenswrapper[4736]: I0214 11:46:52.397819 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:46:52 crc kubenswrapper[4736]: E0214 11:46:52.398273 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:46:55 crc kubenswrapper[4736]: I0214 11:46:55.239610 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:55 crc kubenswrapper[4736]: I0214 11:46:55.240247 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:55 crc kubenswrapper[4736]: I0214 11:46:55.766013 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:55 crc kubenswrapper[4736]: I0214 11:46:55.827799 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:56 crc kubenswrapper[4736]: I0214 11:46:56.003924 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vh8b2"] Feb 14 11:46:57 crc kubenswrapper[4736]: I0214 11:46:57.735205 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vh8b2" podUID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerName="registry-server" containerID="cri-o://e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee" gracePeriod=2 Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.408099 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.550251 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8fbn\" (UniqueName: \"kubernetes.io/projected/b5ee64c7-a97b-496b-b11e-c24c5a009d37-kube-api-access-k8fbn\") pod \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.550382 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-catalog-content\") pod \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.550495 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-utilities\") pod \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\" (UID: \"b5ee64c7-a97b-496b-b11e-c24c5a009d37\") " Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.551899 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-utilities" (OuterVolumeSpecName: "utilities") pod "b5ee64c7-a97b-496b-b11e-c24c5a009d37" (UID: "b5ee64c7-a97b-496b-b11e-c24c5a009d37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.556923 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ee64c7-a97b-496b-b11e-c24c5a009d37-kube-api-access-k8fbn" (OuterVolumeSpecName: "kube-api-access-k8fbn") pod "b5ee64c7-a97b-496b-b11e-c24c5a009d37" (UID: "b5ee64c7-a97b-496b-b11e-c24c5a009d37"). InnerVolumeSpecName "kube-api-access-k8fbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.606097 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5ee64c7-a97b-496b-b11e-c24c5a009d37" (UID: "b5ee64c7-a97b-496b-b11e-c24c5a009d37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.652151 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.652386 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8fbn\" (UniqueName: \"kubernetes.io/projected/b5ee64c7-a97b-496b-b11e-c24c5a009d37-kube-api-access-k8fbn\") on node \"crc\" DevicePath \"\"" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.652467 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ee64c7-a97b-496b-b11e-c24c5a009d37-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.751497 4736 generic.go:334] "Generic (PLEG): container finished" podID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerID="e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee" exitCode=0 Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.751543 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh8b2" event={"ID":"b5ee64c7-a97b-496b-b11e-c24c5a009d37","Type":"ContainerDied","Data":"e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee"} Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.751571 4736 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-vh8b2" event={"ID":"b5ee64c7-a97b-496b-b11e-c24c5a009d37","Type":"ContainerDied","Data":"487af6682c079bba1105092ba0ae0fe9a57949311395693b41250fc572101764"} Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.751589 4736 scope.go:117] "RemoveContainer" containerID="e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.751723 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vh8b2" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.773340 4736 scope.go:117] "RemoveContainer" containerID="d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.801819 4736 scope.go:117] "RemoveContainer" containerID="2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.801863 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vh8b2"] Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.820275 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vh8b2"] Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.862003 4736 scope.go:117] "RemoveContainer" containerID="e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee" Feb 14 11:46:58 crc kubenswrapper[4736]: E0214 11:46:58.862770 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee\": container with ID starting with e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee not found: ID does not exist" containerID="e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 
11:46:58.862824 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee"} err="failed to get container status \"e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee\": rpc error: code = NotFound desc = could not find container \"e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee\": container with ID starting with e1fdc98ed8a8d6cf26e323e34d7f83aac8ec9e0cc6c3d5e9c1fe5893e0b312ee not found: ID does not exist" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.862851 4736 scope.go:117] "RemoveContainer" containerID="d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8" Feb 14 11:46:58 crc kubenswrapper[4736]: E0214 11:46:58.863230 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8\": container with ID starting with d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8 not found: ID does not exist" containerID="d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.863269 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8"} err="failed to get container status \"d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8\": rpc error: code = NotFound desc = could not find container \"d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8\": container with ID starting with d8df415615a52498117c46dbd55ab706a500a4a56595a3ef1e2d91836f377bf8 not found: ID does not exist" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.863298 4736 scope.go:117] "RemoveContainer" containerID="2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e" Feb 14 11:46:58 crc 
kubenswrapper[4736]: E0214 11:46:58.863575 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e\": container with ID starting with 2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e not found: ID does not exist" containerID="2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e" Feb 14 11:46:58 crc kubenswrapper[4736]: I0214 11:46:58.863608 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e"} err="failed to get container status \"2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e\": rpc error: code = NotFound desc = could not find container \"2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e\": container with ID starting with 2e17de13e14776f18fc940e8d3043275f563915e6dca45839adca4ea244d668e not found: ID does not exist" Feb 14 11:47:00 crc kubenswrapper[4736]: I0214 11:47:00.412963 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" path="/var/lib/kubelet/pods/b5ee64c7-a97b-496b-b11e-c24c5a009d37/volumes" Feb 14 11:47:04 crc kubenswrapper[4736]: I0214 11:47:04.398043 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:47:04 crc kubenswrapper[4736]: E0214 11:47:04.399148 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:47:17 crc 
kubenswrapper[4736]: I0214 11:47:17.397893 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:47:17 crc kubenswrapper[4736]: E0214 11:47:17.398777 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:47:30 crc kubenswrapper[4736]: I0214 11:47:30.413862 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:47:30 crc kubenswrapper[4736]: E0214 11:47:30.414786 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:47:44 crc kubenswrapper[4736]: I0214 11:47:44.396599 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:47:44 crc kubenswrapper[4736]: E0214 11:47:44.397508 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 
14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.541874 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dvz8j"] Feb 14 11:47:57 crc kubenswrapper[4736]: E0214 11:47:57.542768 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerName="extract-utilities" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.542779 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerName="extract-utilities" Feb 14 11:47:57 crc kubenswrapper[4736]: E0214 11:47:57.542796 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerName="extract-content" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.542802 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerName="extract-content" Feb 14 11:47:57 crc kubenswrapper[4736]: E0214 11:47:57.542812 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerName="registry-server" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.542817 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerName="registry-server" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.542995 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ee64c7-a97b-496b-b11e-c24c5a009d37" containerName="registry-server" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.547372 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.563376 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvz8j"] Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.663834 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-catalog-content\") pod \"redhat-operators-dvz8j\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.664146 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-utilities\") pod \"redhat-operators-dvz8j\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.664323 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbvk5\" (UniqueName: \"kubernetes.io/projected/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-kube-api-access-pbvk5\") pod \"redhat-operators-dvz8j\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.766316 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbvk5\" (UniqueName: \"kubernetes.io/projected/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-kube-api-access-pbvk5\") pod \"redhat-operators-dvz8j\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.766724 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-catalog-content\") pod \"redhat-operators-dvz8j\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.767136 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-catalog-content\") pod \"redhat-operators-dvz8j\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.767259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-utilities\") pod \"redhat-operators-dvz8j\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.767531 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-utilities\") pod \"redhat-operators-dvz8j\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.789841 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbvk5\" (UniqueName: \"kubernetes.io/projected/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-kube-api-access-pbvk5\") pod \"redhat-operators-dvz8j\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:57 crc kubenswrapper[4736]: I0214 11:47:57.905967 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:47:58 crc kubenswrapper[4736]: I0214 11:47:58.397507 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:47:58 crc kubenswrapper[4736]: E0214 11:47:58.397943 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:47:58 crc kubenswrapper[4736]: I0214 11:47:58.413568 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvz8j"] Feb 14 11:47:59 crc kubenswrapper[4736]: I0214 11:47:59.342361 4736 generic.go:334] "Generic (PLEG): container finished" podID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerID="d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b" exitCode=0 Feb 14 11:47:59 crc kubenswrapper[4736]: I0214 11:47:59.342458 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvz8j" event={"ID":"97e8ed08-9b78-45be-a6a6-3cefda8fad3d","Type":"ContainerDied","Data":"d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b"} Feb 14 11:47:59 crc kubenswrapper[4736]: I0214 11:47:59.343138 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvz8j" event={"ID":"97e8ed08-9b78-45be-a6a6-3cefda8fad3d","Type":"ContainerStarted","Data":"1d0460c59eb2cb65c5581533c272bc41804b88d19e16e67e72dce8da3c76cb17"} Feb 14 11:48:00 crc kubenswrapper[4736]: I0214 11:48:00.352624 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvz8j" 
event={"ID":"97e8ed08-9b78-45be-a6a6-3cefda8fad3d","Type":"ContainerStarted","Data":"1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2"} Feb 14 11:48:05 crc kubenswrapper[4736]: I0214 11:48:05.399446 4736 generic.go:334] "Generic (PLEG): container finished" podID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerID="1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2" exitCode=0 Feb 14 11:48:05 crc kubenswrapper[4736]: I0214 11:48:05.399483 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvz8j" event={"ID":"97e8ed08-9b78-45be-a6a6-3cefda8fad3d","Type":"ContainerDied","Data":"1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2"} Feb 14 11:48:06 crc kubenswrapper[4736]: I0214 11:48:06.421156 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvz8j" event={"ID":"97e8ed08-9b78-45be-a6a6-3cefda8fad3d","Type":"ContainerStarted","Data":"a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd"} Feb 14 11:48:06 crc kubenswrapper[4736]: I0214 11:48:06.446512 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dvz8j" podStartSLOduration=3.013545928 podStartE2EDuration="9.446488289s" podCreationTimestamp="2026-02-14 11:47:57 +0000 UTC" firstStartedPulling="2026-02-14 11:47:59.345406888 +0000 UTC m=+3989.714034256" lastFinishedPulling="2026-02-14 11:48:05.778349249 +0000 UTC m=+3996.146976617" observedRunningTime="2026-02-14 11:48:06.437315599 +0000 UTC m=+3996.805942967" watchObservedRunningTime="2026-02-14 11:48:06.446488289 +0000 UTC m=+3996.815115667" Feb 14 11:48:07 crc kubenswrapper[4736]: I0214 11:48:07.907088 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:48:07 crc kubenswrapper[4736]: I0214 11:48:07.907440 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:48:08 crc kubenswrapper[4736]: I0214 11:48:08.965513 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dvz8j" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerName="registry-server" probeResult="failure" output=< Feb 14 11:48:08 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:48:08 crc kubenswrapper[4736]: > Feb 14 11:48:12 crc kubenswrapper[4736]: I0214 11:48:12.397720 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:48:12 crc kubenswrapper[4736]: E0214 11:48:12.398426 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:48:17 crc kubenswrapper[4736]: I0214 11:48:17.963080 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:48:18 crc kubenswrapper[4736]: I0214 11:48:18.062403 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:48:18 crc kubenswrapper[4736]: I0214 11:48:18.200415 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dvz8j"] Feb 14 11:48:19 crc kubenswrapper[4736]: I0214 11:48:19.530642 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dvz8j" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerName="registry-server" 
containerID="cri-o://a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd" gracePeriod=2 Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.078063 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.221193 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-catalog-content\") pod \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.221455 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbvk5\" (UniqueName: \"kubernetes.io/projected/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-kube-api-access-pbvk5\") pod \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.221481 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-utilities\") pod \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\" (UID: \"97e8ed08-9b78-45be-a6a6-3cefda8fad3d\") " Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.223101 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-utilities" (OuterVolumeSpecName: "utilities") pod "97e8ed08-9b78-45be-a6a6-3cefda8fad3d" (UID: "97e8ed08-9b78-45be-a6a6-3cefda8fad3d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.234939 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-kube-api-access-pbvk5" (OuterVolumeSpecName: "kube-api-access-pbvk5") pod "97e8ed08-9b78-45be-a6a6-3cefda8fad3d" (UID: "97e8ed08-9b78-45be-a6a6-3cefda8fad3d"). InnerVolumeSpecName "kube-api-access-pbvk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.323710 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbvk5\" (UniqueName: \"kubernetes.io/projected/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-kube-api-access-pbvk5\") on node \"crc\" DevicePath \"\"" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.324020 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.361707 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97e8ed08-9b78-45be-a6a6-3cefda8fad3d" (UID: "97e8ed08-9b78-45be-a6a6-3cefda8fad3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.426529 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97e8ed08-9b78-45be-a6a6-3cefda8fad3d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.540069 4736 generic.go:334] "Generic (PLEG): container finished" podID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerID="a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd" exitCode=0 Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.540115 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvz8j" event={"ID":"97e8ed08-9b78-45be-a6a6-3cefda8fad3d","Type":"ContainerDied","Data":"a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd"} Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.540141 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvz8j" event={"ID":"97e8ed08-9b78-45be-a6a6-3cefda8fad3d","Type":"ContainerDied","Data":"1d0460c59eb2cb65c5581533c272bc41804b88d19e16e67e72dce8da3c76cb17"} Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.540156 4736 scope.go:117] "RemoveContainer" containerID="a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.541177 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvz8j" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.565858 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dvz8j"] Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.565954 4736 scope.go:117] "RemoveContainer" containerID="1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.581163 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dvz8j"] Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.612487 4736 scope.go:117] "RemoveContainer" containerID="d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.647502 4736 scope.go:117] "RemoveContainer" containerID="a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd" Feb 14 11:48:20 crc kubenswrapper[4736]: E0214 11:48:20.647986 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd\": container with ID starting with a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd not found: ID does not exist" containerID="a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.648018 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd"} err="failed to get container status \"a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd\": rpc error: code = NotFound desc = could not find container \"a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd\": container with ID starting with a22f0604992b84c98bec3bc492754070e6ec63291c41d624703500e827fdbafd not found: ID does 
not exist" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.648044 4736 scope.go:117] "RemoveContainer" containerID="1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2" Feb 14 11:48:20 crc kubenswrapper[4736]: E0214 11:48:20.648465 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2\": container with ID starting with 1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2 not found: ID does not exist" containerID="1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.648493 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2"} err="failed to get container status \"1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2\": rpc error: code = NotFound desc = could not find container \"1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2\": container with ID starting with 1dda49c5033a94b391b52eda7146f4f74a7baf69bc732d18812c3aa707929ee2 not found: ID does not exist" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.648512 4736 scope.go:117] "RemoveContainer" containerID="d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b" Feb 14 11:48:20 crc kubenswrapper[4736]: E0214 11:48:20.648731 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b\": container with ID starting with d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b not found: ID does not exist" containerID="d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b" Feb 14 11:48:20 crc kubenswrapper[4736]: I0214 11:48:20.648773 4736 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b"} err="failed to get container status \"d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b\": rpc error: code = NotFound desc = could not find container \"d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b\": container with ID starting with d3db0e489fdfa13648bde8279689858fc59b9a54f794254d8c5f89b93ffeb33b not found: ID does not exist" Feb 14 11:48:22 crc kubenswrapper[4736]: I0214 11:48:22.412788 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" path="/var/lib/kubelet/pods/97e8ed08-9b78-45be-a6a6-3cefda8fad3d/volumes" Feb 14 11:48:25 crc kubenswrapper[4736]: I0214 11:48:25.397307 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:48:25 crc kubenswrapper[4736]: E0214 11:48:25.398001 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:48:40 crc kubenswrapper[4736]: I0214 11:48:40.405317 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:48:40 crc kubenswrapper[4736]: E0214 11:48:40.406121 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:48:54 crc kubenswrapper[4736]: I0214 11:48:54.398312 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:48:54 crc kubenswrapper[4736]: E0214 11:48:54.399051 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:49:06 crc kubenswrapper[4736]: I0214 11:49:06.399483 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:49:06 crc kubenswrapper[4736]: E0214 11:49:06.401374 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:49:18 crc kubenswrapper[4736]: I0214 11:49:18.397628 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:49:18 crc kubenswrapper[4736]: E0214 11:49:18.399183 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:49:29 crc kubenswrapper[4736]: I0214 11:49:29.397872 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:49:29 crc kubenswrapper[4736]: E0214 11:49:29.399917 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:49:40 crc kubenswrapper[4736]: I0214 11:49:40.410218 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:49:40 crc kubenswrapper[4736]: E0214 11:49:40.411242 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:49:51 crc kubenswrapper[4736]: I0214 11:49:51.397883 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:49:51 crc kubenswrapper[4736]: E0214 11:49:51.398721 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:50:06 crc kubenswrapper[4736]: I0214 11:50:06.397618 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:50:06 crc kubenswrapper[4736]: E0214 11:50:06.398245 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:50:20 crc kubenswrapper[4736]: I0214 11:50:20.407688 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:50:20 crc kubenswrapper[4736]: I0214 11:50:20.671575 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"0c9a9f31049018d899c2d6e7f661d4eda2a270223213dd9c82f3c2316bb40fcb"} Feb 14 11:51:03 crc kubenswrapper[4736]: I0214 11:51:03.086213 4736 generic.go:334] "Generic (PLEG): container finished" podID="ab2bcae4-a5d8-471d-a031-b0e810759ab1" containerID="fb38cb3721465a5e54e2122776e466c5a383fbb6e19ead75e5ea306db4d2fe0a" exitCode=0 Feb 14 11:51:03 crc kubenswrapper[4736]: I0214 11:51:03.086653 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"ab2bcae4-a5d8-471d-a031-b0e810759ab1","Type":"ContainerDied","Data":"fb38cb3721465a5e54e2122776e466c5a383fbb6e19ead75e5ea306db4d2fe0a"} Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.535382 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.622972 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-workdir\") pod \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.623031 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.623136 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config-secret\") pod \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.623184 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ssh-key\") pod \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.623219 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-config-data\") pod \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.623256 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ca-certs\") pod \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.623303 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-temporary\") pod \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.623353 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config\") pod \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.623372 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkt9x\" (UniqueName: \"kubernetes.io/projected/ab2bcae4-a5d8-471d-a031-b0e810759ab1-kube-api-access-xkt9x\") pod \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\" (UID: \"ab2bcae4-a5d8-471d-a031-b0e810759ab1\") " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.624254 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-config-data" (OuterVolumeSpecName: "config-data") pod "ab2bcae4-a5d8-471d-a031-b0e810759ab1" (UID: "ab2bcae4-a5d8-471d-a031-b0e810759ab1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.624847 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "ab2bcae4-a5d8-471d-a031-b0e810759ab1" (UID: "ab2bcae4-a5d8-471d-a031-b0e810759ab1"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.629044 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "ab2bcae4-a5d8-471d-a031-b0e810759ab1" (UID: "ab2bcae4-a5d8-471d-a031-b0e810759ab1"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.630177 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "ab2bcae4-a5d8-471d-a031-b0e810759ab1" (UID: "ab2bcae4-a5d8-471d-a031-b0e810759ab1"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.631164 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2bcae4-a5d8-471d-a031-b0e810759ab1-kube-api-access-xkt9x" (OuterVolumeSpecName: "kube-api-access-xkt9x") pod "ab2bcae4-a5d8-471d-a031-b0e810759ab1" (UID: "ab2bcae4-a5d8-471d-a031-b0e810759ab1"). InnerVolumeSpecName "kube-api-access-xkt9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.661315 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ab2bcae4-a5d8-471d-a031-b0e810759ab1" (UID: "ab2bcae4-a5d8-471d-a031-b0e810759ab1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.661780 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ab2bcae4-a5d8-471d-a031-b0e810759ab1" (UID: "ab2bcae4-a5d8-471d-a031-b0e810759ab1"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.675157 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "ab2bcae4-a5d8-471d-a031-b0e810759ab1" (UID: "ab2bcae4-a5d8-471d-a031-b0e810759ab1"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.689719 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ab2bcae4-a5d8-471d-a031-b0e810759ab1" (UID: "ab2bcae4-a5d8-471d-a031-b0e810759ab1"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.725405 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.725453 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkt9x\" (UniqueName: \"kubernetes.io/projected/ab2bcae4-a5d8-471d-a031-b0e810759ab1-kube-api-access-xkt9x\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.725472 4736 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.726342 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.726373 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.726386 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.726400 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2bcae4-a5d8-471d-a031-b0e810759ab1-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:04 crc 
kubenswrapper[4736]: I0214 11:51:04.726415 4736 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ab2bcae4-a5d8-471d-a031-b0e810759ab1-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.726432 4736 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab2bcae4-a5d8-471d-a031-b0e810759ab1-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.754393 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 14 11:51:04 crc kubenswrapper[4736]: I0214 11:51:04.828813 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:05 crc kubenswrapper[4736]: I0214 11:51:05.106144 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ab2bcae4-a5d8-471d-a031-b0e810759ab1","Type":"ContainerDied","Data":"919f2dd88d220755b40df4665a1818a1bf39d6c3decc815a6911eb17332c3c74"} Feb 14 11:51:05 crc kubenswrapper[4736]: I0214 11:51:05.106191 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="919f2dd88d220755b40df4665a1818a1bf39d6c3decc815a6911eb17332c3c74" Feb 14 11:51:05 crc kubenswrapper[4736]: I0214 11:51:05.106261 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.780444 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 11:51:09 crc kubenswrapper[4736]: E0214 11:51:09.782041 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerName="extract-content" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.782069 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerName="extract-content" Feb 14 11:51:09 crc kubenswrapper[4736]: E0214 11:51:09.782107 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerName="registry-server" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.782121 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerName="registry-server" Feb 14 11:51:09 crc kubenswrapper[4736]: E0214 11:51:09.782153 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2bcae4-a5d8-471d-a031-b0e810759ab1" containerName="tempest-tests-tempest-tests-runner" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.782171 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2bcae4-a5d8-471d-a031-b0e810759ab1" containerName="tempest-tests-tempest-tests-runner" Feb 14 11:51:09 crc kubenswrapper[4736]: E0214 11:51:09.782196 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerName="extract-utilities" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.782208 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerName="extract-utilities" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.782543 4736 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="ab2bcae4-a5d8-471d-a031-b0e810759ab1" containerName="tempest-tests-tempest-tests-runner" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.782582 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e8ed08-9b78-45be-a6a6-3cefda8fad3d" containerName="registry-server" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.783642 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.788943 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-m5rsw" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.793303 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.843930 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s59xm\" (UniqueName: \"kubernetes.io/projected/fb072c2c-7982-4e0b-825a-1b64b951f0a7-kube-api-access-s59xm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb072c2c-7982-4e0b-825a-1b64b951f0a7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.844110 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb072c2c-7982-4e0b-825a-1b64b951f0a7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.946330 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb072c2c-7982-4e0b-825a-1b64b951f0a7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.946487 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s59xm\" (UniqueName: \"kubernetes.io/projected/fb072c2c-7982-4e0b-825a-1b64b951f0a7-kube-api-access-s59xm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb072c2c-7982-4e0b-825a-1b64b951f0a7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.947201 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb072c2c-7982-4e0b-825a-1b64b951f0a7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.970942 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s59xm\" (UniqueName: \"kubernetes.io/projected/fb072c2c-7982-4e0b-825a-1b64b951f0a7-kube-api-access-s59xm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb072c2c-7982-4e0b-825a-1b64b951f0a7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 11:51:09 crc kubenswrapper[4736]: I0214 11:51:09.974973 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb072c2c-7982-4e0b-825a-1b64b951f0a7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 11:51:10 
crc kubenswrapper[4736]: I0214 11:51:10.110852 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 11:51:10 crc kubenswrapper[4736]: I0214 11:51:10.603378 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 11:51:11 crc kubenswrapper[4736]: I0214 11:51:11.174588 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fb072c2c-7982-4e0b-825a-1b64b951f0a7","Type":"ContainerStarted","Data":"7415414154aa1537f04d231bc572125878d6ac358257656e241638233f90457e"} Feb 14 11:51:12 crc kubenswrapper[4736]: I0214 11:51:12.185441 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fb072c2c-7982-4e0b-825a-1b64b951f0a7","Type":"ContainerStarted","Data":"7d58f3f293b1656dcea52d636eecedd349e3b375fc778b2e74e3418658080538"} Feb 14 11:51:12 crc kubenswrapper[4736]: I0214 11:51:12.209302 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.9524176629999999 podStartE2EDuration="3.209282025s" podCreationTimestamp="2026-02-14 11:51:09 +0000 UTC" firstStartedPulling="2026-02-14 11:51:10.614961694 +0000 UTC m=+4180.983589062" lastFinishedPulling="2026-02-14 11:51:11.871826046 +0000 UTC m=+4182.240453424" observedRunningTime="2026-02-14 11:51:12.208107462 +0000 UTC m=+4182.576734840" watchObservedRunningTime="2026-02-14 11:51:12.209282025 +0000 UTC m=+4182.577909413" Feb 14 11:51:29 crc kubenswrapper[4736]: I0214 11:51:29.962523 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xk4fl"] Feb 14 11:51:29 crc kubenswrapper[4736]: I0214 11:51:29.967804 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:29 crc kubenswrapper[4736]: I0214 11:51:29.985444 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk4fl"] Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.053728 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x69hs\" (UniqueName: \"kubernetes.io/projected/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-kube-api-access-x69hs\") pod \"redhat-marketplace-xk4fl\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.053865 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-catalog-content\") pod \"redhat-marketplace-xk4fl\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.053903 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-utilities\") pod \"redhat-marketplace-xk4fl\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.155816 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-catalog-content\") pod \"redhat-marketplace-xk4fl\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.155885 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-utilities\") pod \"redhat-marketplace-xk4fl\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.155994 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x69hs\" (UniqueName: \"kubernetes.io/projected/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-kube-api-access-x69hs\") pod \"redhat-marketplace-xk4fl\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.156353 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-catalog-content\") pod \"redhat-marketplace-xk4fl\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.157072 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-utilities\") pod \"redhat-marketplace-xk4fl\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.187772 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x69hs\" (UniqueName: \"kubernetes.io/projected/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-kube-api-access-x69hs\") pod \"redhat-marketplace-xk4fl\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:30 crc kubenswrapper[4736]: I0214 11:51:30.305085 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:31 crc kubenswrapper[4736]: I0214 11:51:31.350288 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk4fl"] Feb 14 11:51:31 crc kubenswrapper[4736]: I0214 11:51:31.434122 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk4fl" event={"ID":"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7","Type":"ContainerStarted","Data":"af922142b07b0f4eb7e85e4c3f07463c2235910b07a8d3425eb5ec01051cf6eb"} Feb 14 11:51:32 crc kubenswrapper[4736]: I0214 11:51:32.447875 4736 generic.go:334] "Generic (PLEG): container finished" podID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerID="be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4" exitCode=0 Feb 14 11:51:32 crc kubenswrapper[4736]: I0214 11:51:32.448009 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk4fl" event={"ID":"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7","Type":"ContainerDied","Data":"be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4"} Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.229019 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7p982/must-gather-ksxhl"] Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.230768 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.232839 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7p982"/"kube-root-ca.crt" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.233523 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7p982"/"openshift-service-ca.crt" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.241137 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7p982"/"default-dockercfg-78f2v" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.250104 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7p982/must-gather-ksxhl"] Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.327296 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dllc4\" (UniqueName: \"kubernetes.io/projected/6039b228-04e4-4ce5-817b-192d8fdec1be-kube-api-access-dllc4\") pod \"must-gather-ksxhl\" (UID: \"6039b228-04e4-4ce5-817b-192d8fdec1be\") " pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.327647 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6039b228-04e4-4ce5-817b-192d8fdec1be-must-gather-output\") pod \"must-gather-ksxhl\" (UID: \"6039b228-04e4-4ce5-817b-192d8fdec1be\") " pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.429734 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6039b228-04e4-4ce5-817b-192d8fdec1be-must-gather-output\") pod \"must-gather-ksxhl\" (UID: \"6039b228-04e4-4ce5-817b-192d8fdec1be\") " 
pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.429993 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dllc4\" (UniqueName: \"kubernetes.io/projected/6039b228-04e4-4ce5-817b-192d8fdec1be-kube-api-access-dllc4\") pod \"must-gather-ksxhl\" (UID: \"6039b228-04e4-4ce5-817b-192d8fdec1be\") " pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.430146 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6039b228-04e4-4ce5-817b-192d8fdec1be-must-gather-output\") pod \"must-gather-ksxhl\" (UID: \"6039b228-04e4-4ce5-817b-192d8fdec1be\") " pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.489006 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dllc4\" (UniqueName: \"kubernetes.io/projected/6039b228-04e4-4ce5-817b-192d8fdec1be-kube-api-access-dllc4\") pod \"must-gather-ksxhl\" (UID: \"6039b228-04e4-4ce5-817b-192d8fdec1be\") " pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.552803 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.555976 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mkfzh"] Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.566801 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkfzh"] Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.566903 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.743870 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgb2j\" (UniqueName: \"kubernetes.io/projected/0fa1dfe8-b647-4b05-9199-bfd8788d0205-kube-api-access-vgb2j\") pod \"community-operators-mkfzh\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.744132 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-utilities\") pod \"community-operators-mkfzh\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.744232 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-catalog-content\") pod \"community-operators-mkfzh\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.846664 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-catalog-content\") pod \"community-operators-mkfzh\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.846770 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgb2j\" (UniqueName: \"kubernetes.io/projected/0fa1dfe8-b647-4b05-9199-bfd8788d0205-kube-api-access-vgb2j\") pod 
\"community-operators-mkfzh\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.846789 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-utilities\") pod \"community-operators-mkfzh\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.847546 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-utilities\") pod \"community-operators-mkfzh\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:33 crc kubenswrapper[4736]: I0214 11:51:33.847777 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-catalog-content\") pod \"community-operators-mkfzh\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:34 crc kubenswrapper[4736]: I0214 11:51:34.148987 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7p982/must-gather-ksxhl"] Feb 14 11:51:34 crc kubenswrapper[4736]: I0214 11:51:34.189605 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgb2j\" (UniqueName: \"kubernetes.io/projected/0fa1dfe8-b647-4b05-9199-bfd8788d0205-kube-api-access-vgb2j\") pod \"community-operators-mkfzh\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:34 crc kubenswrapper[4736]: I0214 11:51:34.271100 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:34 crc kubenswrapper[4736]: I0214 11:51:34.481079 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/must-gather-ksxhl" event={"ID":"6039b228-04e4-4ce5-817b-192d8fdec1be","Type":"ContainerStarted","Data":"f7c53137b8f4df4e78c45911f6beab97ec8cdadbe58dbbd33a62958cc6cb0e40"} Feb 14 11:51:34 crc kubenswrapper[4736]: I0214 11:51:34.483409 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk4fl" event={"ID":"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7","Type":"ContainerStarted","Data":"c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb"} Feb 14 11:51:34 crc kubenswrapper[4736]: I0214 11:51:34.925161 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkfzh"] Feb 14 11:51:34 crc kubenswrapper[4736]: W0214 11:51:34.941195 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fa1dfe8_b647_4b05_9199_bfd8788d0205.slice/crio-62d05b168a68aa5b91a54af11a2e30d864f3aac6595a5d4e661c0ca242104f94 WatchSource:0}: Error finding container 62d05b168a68aa5b91a54af11a2e30d864f3aac6595a5d4e661c0ca242104f94: Status 404 returned error can't find the container with id 62d05b168a68aa5b91a54af11a2e30d864f3aac6595a5d4e661c0ca242104f94 Feb 14 11:51:35 crc kubenswrapper[4736]: I0214 11:51:35.497122 4736 generic.go:334] "Generic (PLEG): container finished" podID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerID="c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb" exitCode=0 Feb 14 11:51:35 crc kubenswrapper[4736]: I0214 11:51:35.497196 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk4fl" 
event={"ID":"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7","Type":"ContainerDied","Data":"c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb"} Feb 14 11:51:35 crc kubenswrapper[4736]: I0214 11:51:35.501492 4736 generic.go:334] "Generic (PLEG): container finished" podID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerID="b8de2ce82477867c03adcd716334c3f5f9fca57ba32b798305c0e704386a73ea" exitCode=0 Feb 14 11:51:35 crc kubenswrapper[4736]: I0214 11:51:35.501535 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkfzh" event={"ID":"0fa1dfe8-b647-4b05-9199-bfd8788d0205","Type":"ContainerDied","Data":"b8de2ce82477867c03adcd716334c3f5f9fca57ba32b798305c0e704386a73ea"} Feb 14 11:51:35 crc kubenswrapper[4736]: I0214 11:51:35.501562 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkfzh" event={"ID":"0fa1dfe8-b647-4b05-9199-bfd8788d0205","Type":"ContainerStarted","Data":"62d05b168a68aa5b91a54af11a2e30d864f3aac6595a5d4e661c0ca242104f94"} Feb 14 11:51:36 crc kubenswrapper[4736]: I0214 11:51:36.517031 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk4fl" event={"ID":"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7","Type":"ContainerStarted","Data":"a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f"} Feb 14 11:51:36 crc kubenswrapper[4736]: I0214 11:51:36.525609 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkfzh" event={"ID":"0fa1dfe8-b647-4b05-9199-bfd8788d0205","Type":"ContainerStarted","Data":"b2ac621c909287fd7178755b412f2d656ef23bf31dd0dc98e20f1307000bbf12"} Feb 14 11:51:36 crc kubenswrapper[4736]: I0214 11:51:36.548411 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xk4fl" podStartSLOduration=4.089716224 podStartE2EDuration="7.548391798s" podCreationTimestamp="2026-02-14 11:51:29 
+0000 UTC" firstStartedPulling="2026-02-14 11:51:32.451007844 +0000 UTC m=+4202.819635212" lastFinishedPulling="2026-02-14 11:51:35.909683418 +0000 UTC m=+4206.278310786" observedRunningTime="2026-02-14 11:51:36.541199656 +0000 UTC m=+4206.909827024" watchObservedRunningTime="2026-02-14 11:51:36.548391798 +0000 UTC m=+4206.917019166" Feb 14 11:51:38 crc kubenswrapper[4736]: I0214 11:51:38.542539 4736 generic.go:334] "Generic (PLEG): container finished" podID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerID="b2ac621c909287fd7178755b412f2d656ef23bf31dd0dc98e20f1307000bbf12" exitCode=0 Feb 14 11:51:38 crc kubenswrapper[4736]: I0214 11:51:38.542585 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkfzh" event={"ID":"0fa1dfe8-b647-4b05-9199-bfd8788d0205","Type":"ContainerDied","Data":"b2ac621c909287fd7178755b412f2d656ef23bf31dd0dc98e20f1307000bbf12"} Feb 14 11:51:40 crc kubenswrapper[4736]: I0214 11:51:40.305504 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:40 crc kubenswrapper[4736]: I0214 11:51:40.305782 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:40 crc kubenswrapper[4736]: I0214 11:51:40.359058 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:43 crc kubenswrapper[4736]: I0214 11:51:43.590511 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/must-gather-ksxhl" event={"ID":"6039b228-04e4-4ce5-817b-192d8fdec1be","Type":"ContainerStarted","Data":"01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933"} Feb 14 11:51:43 crc kubenswrapper[4736]: I0214 11:51:43.591199 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/must-gather-ksxhl" 
event={"ID":"6039b228-04e4-4ce5-817b-192d8fdec1be","Type":"ContainerStarted","Data":"612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a"} Feb 14 11:51:43 crc kubenswrapper[4736]: I0214 11:51:43.593417 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkfzh" event={"ID":"0fa1dfe8-b647-4b05-9199-bfd8788d0205","Type":"ContainerStarted","Data":"1fb7a34e661a5625feab5d6649f96f72a15a61e8b894c4634ff67d769a3df022"} Feb 14 11:51:43 crc kubenswrapper[4736]: I0214 11:51:43.618221 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7p982/must-gather-ksxhl" podStartSLOduration=2.343670914 podStartE2EDuration="10.618200678s" podCreationTimestamp="2026-02-14 11:51:33 +0000 UTC" firstStartedPulling="2026-02-14 11:51:34.299783359 +0000 UTC m=+4204.668410727" lastFinishedPulling="2026-02-14 11:51:42.574313113 +0000 UTC m=+4212.942940491" observedRunningTime="2026-02-14 11:51:43.614215266 +0000 UTC m=+4213.982842634" watchObservedRunningTime="2026-02-14 11:51:43.618200678 +0000 UTC m=+4213.986828056" Feb 14 11:51:43 crc kubenswrapper[4736]: I0214 11:51:43.645782 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mkfzh" podStartSLOduration=3.6053919629999998 podStartE2EDuration="10.645729773s" podCreationTimestamp="2026-02-14 11:51:33 +0000 UTC" firstStartedPulling="2026-02-14 11:51:35.504239166 +0000 UTC m=+4205.872866534" lastFinishedPulling="2026-02-14 11:51:42.544576966 +0000 UTC m=+4212.913204344" observedRunningTime="2026-02-14 11:51:43.630782563 +0000 UTC m=+4213.999409931" watchObservedRunningTime="2026-02-14 11:51:43.645729773 +0000 UTC m=+4214.014357161" Feb 14 11:51:44 crc kubenswrapper[4736]: I0214 11:51:44.272418 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:44 crc kubenswrapper[4736]: I0214 
11:51:44.272469 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:45 crc kubenswrapper[4736]: I0214 11:51:45.452302 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mkfzh" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerName="registry-server" probeResult="failure" output=< Feb 14 11:51:45 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:51:45 crc kubenswrapper[4736]: > Feb 14 11:51:46 crc kubenswrapper[4736]: E0214 11:51:46.815336 4736 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.212:34320->38.102.83.212:35011: write tcp 38.102.83.212:34320->38.102.83.212:35011: write: connection reset by peer Feb 14 11:51:47 crc kubenswrapper[4736]: I0214 11:51:47.955148 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7p982/crc-debug-95hv2"] Feb 14 11:51:47 crc kubenswrapper[4736]: I0214 11:51:47.956280 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:51:48 crc kubenswrapper[4736]: I0214 11:51:48.148541 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcxh\" (UniqueName: \"kubernetes.io/projected/435fe0b1-7f5a-4a67-b7da-5b0e23561255-kube-api-access-xhcxh\") pod \"crc-debug-95hv2\" (UID: \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\") " pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:51:48 crc kubenswrapper[4736]: I0214 11:51:48.149269 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/435fe0b1-7f5a-4a67-b7da-5b0e23561255-host\") pod \"crc-debug-95hv2\" (UID: \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\") " pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:51:48 crc kubenswrapper[4736]: I0214 11:51:48.251728 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcxh\" (UniqueName: \"kubernetes.io/projected/435fe0b1-7f5a-4a67-b7da-5b0e23561255-kube-api-access-xhcxh\") pod \"crc-debug-95hv2\" (UID: \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\") " pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:51:48 crc kubenswrapper[4736]: I0214 11:51:48.251801 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/435fe0b1-7f5a-4a67-b7da-5b0e23561255-host\") pod \"crc-debug-95hv2\" (UID: \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\") " pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:51:48 crc kubenswrapper[4736]: I0214 11:51:48.251965 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/435fe0b1-7f5a-4a67-b7da-5b0e23561255-host\") pod \"crc-debug-95hv2\" (UID: \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\") " pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:51:48 crc 
kubenswrapper[4736]: I0214 11:51:48.270527 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcxh\" (UniqueName: \"kubernetes.io/projected/435fe0b1-7f5a-4a67-b7da-5b0e23561255-kube-api-access-xhcxh\") pod \"crc-debug-95hv2\" (UID: \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\") " pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:51:48 crc kubenswrapper[4736]: I0214 11:51:48.276767 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:51:48 crc kubenswrapper[4736]: W0214 11:51:48.307793 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod435fe0b1_7f5a_4a67_b7da_5b0e23561255.slice/crio-77516e8ed973cb51ce0f8ef1ccd6d8061953f14b24185ac4520331931e62daf0 WatchSource:0}: Error finding container 77516e8ed973cb51ce0f8ef1ccd6d8061953f14b24185ac4520331931e62daf0: Status 404 returned error can't find the container with id 77516e8ed973cb51ce0f8ef1ccd6d8061953f14b24185ac4520331931e62daf0 Feb 14 11:51:48 crc kubenswrapper[4736]: I0214 11:51:48.310231 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 11:51:48 crc kubenswrapper[4736]: I0214 11:51:48.635477 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/crc-debug-95hv2" event={"ID":"435fe0b1-7f5a-4a67-b7da-5b0e23561255","Type":"ContainerStarted","Data":"77516e8ed973cb51ce0f8ef1ccd6d8061953f14b24185ac4520331931e62daf0"} Feb 14 11:51:50 crc kubenswrapper[4736]: I0214 11:51:50.361950 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:51 crc kubenswrapper[4736]: I0214 11:51:51.738543 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk4fl"] Feb 14 11:51:51 crc kubenswrapper[4736]: I0214 
11:51:51.739281 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xk4fl" podUID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerName="registry-server" containerID="cri-o://a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f" gracePeriod=2 Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.484112 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.534295 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-catalog-content\") pod \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.534368 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-utilities\") pod \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.534412 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x69hs\" (UniqueName: \"kubernetes.io/projected/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-kube-api-access-x69hs\") pod \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\" (UID: \"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7\") " Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.535157 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-utilities" (OuterVolumeSpecName: "utilities") pod "b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" (UID: "b0945f07-c306-44b5-9eb9-2e88f3f0f0b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.541438 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-kube-api-access-x69hs" (OuterVolumeSpecName: "kube-api-access-x69hs") pod "b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" (UID: "b0945f07-c306-44b5-9eb9-2e88f3f0f0b7"). InnerVolumeSpecName "kube-api-access-x69hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.564088 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" (UID: "b0945f07-c306-44b5-9eb9-2e88f3f0f0b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.636821 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.637062 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.637072 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x69hs\" (UniqueName: \"kubernetes.io/projected/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7-kube-api-access-x69hs\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.678431 4736 generic.go:334] "Generic (PLEG): container finished" podID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" 
containerID="a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f" exitCode=0 Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.678475 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk4fl" event={"ID":"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7","Type":"ContainerDied","Data":"a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f"} Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.678498 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk4fl" event={"ID":"b0945f07-c306-44b5-9eb9-2e88f3f0f0b7","Type":"ContainerDied","Data":"af922142b07b0f4eb7e85e4c3f07463c2235910b07a8d3425eb5ec01051cf6eb"} Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.678509 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk4fl" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.678515 4736 scope.go:117] "RemoveContainer" containerID="a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.710492 4736 scope.go:117] "RemoveContainer" containerID="c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.717303 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk4fl"] Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.735904 4736 scope.go:117] "RemoveContainer" containerID="be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.741789 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk4fl"] Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.779873 4736 scope.go:117] "RemoveContainer" containerID="a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f" Feb 14 
11:51:52 crc kubenswrapper[4736]: E0214 11:51:52.780465 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f\": container with ID starting with a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f not found: ID does not exist" containerID="a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.780501 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f"} err="failed to get container status \"a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f\": rpc error: code = NotFound desc = could not find container \"a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f\": container with ID starting with a95bb2268ab3e26b41a43c33c824a6e43a10fe25fc8cc74a795883c80583b46f not found: ID does not exist" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.780528 4736 scope.go:117] "RemoveContainer" containerID="c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb" Feb 14 11:51:52 crc kubenswrapper[4736]: E0214 11:51:52.780909 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb\": container with ID starting with c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb not found: ID does not exist" containerID="c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.780953 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb"} err="failed to get container status 
\"c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb\": rpc error: code = NotFound desc = could not find container \"c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb\": container with ID starting with c536cbe7f9420a7678179e785256e761ff6157c391497cb12afa6a7de96fcbcb not found: ID does not exist" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.780969 4736 scope.go:117] "RemoveContainer" containerID="be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4" Feb 14 11:51:52 crc kubenswrapper[4736]: E0214 11:51:52.781371 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4\": container with ID starting with be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4 not found: ID does not exist" containerID="be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4" Feb 14 11:51:52 crc kubenswrapper[4736]: I0214 11:51:52.781392 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4"} err="failed to get container status \"be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4\": rpc error: code = NotFound desc = could not find container \"be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4\": container with ID starting with be9ede8f205325abead60ed9c25a8ff1478c2c91582b879a84bf426231be98a4 not found: ID does not exist" Feb 14 11:51:54 crc kubenswrapper[4736]: I0214 11:51:54.331266 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:54 crc kubenswrapper[4736]: I0214 11:51:54.385549 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:54 crc kubenswrapper[4736]: I0214 
11:51:54.410927 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" path="/var/lib/kubelet/pods/b0945f07-c306-44b5-9eb9-2e88f3f0f0b7/volumes" Feb 14 11:51:54 crc kubenswrapper[4736]: I0214 11:51:54.931704 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkfzh"] Feb 14 11:51:55 crc kubenswrapper[4736]: I0214 11:51:55.712446 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mkfzh" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerName="registry-server" containerID="cri-o://1fb7a34e661a5625feab5d6649f96f72a15a61e8b894c4634ff67d769a3df022" gracePeriod=2 Feb 14 11:51:56 crc kubenswrapper[4736]: I0214 11:51:56.725927 4736 generic.go:334] "Generic (PLEG): container finished" podID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerID="1fb7a34e661a5625feab5d6649f96f72a15a61e8b894c4634ff67d769a3df022" exitCode=0 Feb 14 11:51:56 crc kubenswrapper[4736]: I0214 11:51:56.725985 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkfzh" event={"ID":"0fa1dfe8-b647-4b05-9199-bfd8788d0205","Type":"ContainerDied","Data":"1fb7a34e661a5625feab5d6649f96f72a15a61e8b894c4634ff67d769a3df022"} Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.649713 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.680597 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgb2j\" (UniqueName: \"kubernetes.io/projected/0fa1dfe8-b647-4b05-9199-bfd8788d0205-kube-api-access-vgb2j\") pod \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.680677 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-utilities\") pod \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.680699 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-catalog-content\") pod \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\" (UID: \"0fa1dfe8-b647-4b05-9199-bfd8788d0205\") " Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.681580 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-utilities" (OuterVolumeSpecName: "utilities") pod "0fa1dfe8-b647-4b05-9199-bfd8788d0205" (UID: "0fa1dfe8-b647-4b05-9199-bfd8788d0205"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.688632 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa1dfe8-b647-4b05-9199-bfd8788d0205-kube-api-access-vgb2j" (OuterVolumeSpecName: "kube-api-access-vgb2j") pod "0fa1dfe8-b647-4b05-9199-bfd8788d0205" (UID: "0fa1dfe8-b647-4b05-9199-bfd8788d0205"). InnerVolumeSpecName "kube-api-access-vgb2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.723552 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fa1dfe8-b647-4b05-9199-bfd8788d0205" (UID: "0fa1dfe8-b647-4b05-9199-bfd8788d0205"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.756464 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/crc-debug-95hv2" event={"ID":"435fe0b1-7f5a-4a67-b7da-5b0e23561255","Type":"ContainerStarted","Data":"70790b89ff1f455ca11272ed1f2d4a85bb6a1e950030267ca5572718783932bb"} Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.759616 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkfzh" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.759574 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkfzh" event={"ID":"0fa1dfe8-b647-4b05-9199-bfd8788d0205","Type":"ContainerDied","Data":"62d05b168a68aa5b91a54af11a2e30d864f3aac6595a5d4e661c0ca242104f94"} Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.760117 4736 scope.go:117] "RemoveContainer" containerID="1fb7a34e661a5625feab5d6649f96f72a15a61e8b894c4634ff67d769a3df022" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.770809 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7p982/crc-debug-95hv2" podStartSLOduration=1.795057911 podStartE2EDuration="12.769734225s" podCreationTimestamp="2026-02-14 11:51:47 +0000 UTC" firstStartedPulling="2026-02-14 11:51:48.309962505 +0000 UTC m=+4218.678589873" lastFinishedPulling="2026-02-14 11:51:59.284638819 +0000 UTC m=+4229.653266187" 
observedRunningTime="2026-02-14 11:51:59.76776355 +0000 UTC m=+4230.136390918" watchObservedRunningTime="2026-02-14 11:51:59.769734225 +0000 UTC m=+4230.138361593" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.782852 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgb2j\" (UniqueName: \"kubernetes.io/projected/0fa1dfe8-b647-4b05-9199-bfd8788d0205-kube-api-access-vgb2j\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.782973 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.782985 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa1dfe8-b647-4b05-9199-bfd8788d0205-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.799307 4736 scope.go:117] "RemoveContainer" containerID="b2ac621c909287fd7178755b412f2d656ef23bf31dd0dc98e20f1307000bbf12" Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.800720 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkfzh"] Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.813284 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mkfzh"] Feb 14 11:51:59 crc kubenswrapper[4736]: I0214 11:51:59.824801 4736 scope.go:117] "RemoveContainer" containerID="b8de2ce82477867c03adcd716334c3f5f9fca57ba32b798305c0e704386a73ea" Feb 14 11:52:00 crc kubenswrapper[4736]: I0214 11:52:00.419567 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" path="/var/lib/kubelet/pods/0fa1dfe8-b647-4b05-9199-bfd8788d0205/volumes" Feb 14 11:52:47 crc kubenswrapper[4736]: I0214 11:52:47.694993 4736 
patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:52:47 crc kubenswrapper[4736]: I0214 11:52:47.695637 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:52:48 crc kubenswrapper[4736]: I0214 11:52:48.207025 4736 generic.go:334] "Generic (PLEG): container finished" podID="435fe0b1-7f5a-4a67-b7da-5b0e23561255" containerID="70790b89ff1f455ca11272ed1f2d4a85bb6a1e950030267ca5572718783932bb" exitCode=0 Feb 14 11:52:48 crc kubenswrapper[4736]: I0214 11:52:48.207207 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/crc-debug-95hv2" event={"ID":"435fe0b1-7f5a-4a67-b7da-5b0e23561255","Type":"ContainerDied","Data":"70790b89ff1f455ca11272ed1f2d4a85bb6a1e950030267ca5572718783932bb"} Feb 14 11:52:49 crc kubenswrapper[4736]: I0214 11:52:49.303693 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:52:49 crc kubenswrapper[4736]: I0214 11:52:49.340609 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7p982/crc-debug-95hv2"] Feb 14 11:52:49 crc kubenswrapper[4736]: I0214 11:52:49.349301 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7p982/crc-debug-95hv2"] Feb 14 11:52:49 crc kubenswrapper[4736]: I0214 11:52:49.442479 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhcxh\" (UniqueName: \"kubernetes.io/projected/435fe0b1-7f5a-4a67-b7da-5b0e23561255-kube-api-access-xhcxh\") pod \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\" (UID: \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\") " Feb 14 11:52:49 crc kubenswrapper[4736]: I0214 11:52:49.442599 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/435fe0b1-7f5a-4a67-b7da-5b0e23561255-host\") pod \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\" (UID: \"435fe0b1-7f5a-4a67-b7da-5b0e23561255\") " Feb 14 11:52:49 crc kubenswrapper[4736]: I0214 11:52:49.442715 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/435fe0b1-7f5a-4a67-b7da-5b0e23561255-host" (OuterVolumeSpecName: "host") pod "435fe0b1-7f5a-4a67-b7da-5b0e23561255" (UID: "435fe0b1-7f5a-4a67-b7da-5b0e23561255"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:52:49 crc kubenswrapper[4736]: I0214 11:52:49.443142 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/435fe0b1-7f5a-4a67-b7da-5b0e23561255-host\") on node \"crc\" DevicePath \"\"" Feb 14 11:52:49 crc kubenswrapper[4736]: I0214 11:52:49.460023 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435fe0b1-7f5a-4a67-b7da-5b0e23561255-kube-api-access-xhcxh" (OuterVolumeSpecName: "kube-api-access-xhcxh") pod "435fe0b1-7f5a-4a67-b7da-5b0e23561255" (UID: "435fe0b1-7f5a-4a67-b7da-5b0e23561255"). InnerVolumeSpecName "kube-api-access-xhcxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:52:49 crc kubenswrapper[4736]: I0214 11:52:49.544857 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhcxh\" (UniqueName: \"kubernetes.io/projected/435fe0b1-7f5a-4a67-b7da-5b0e23561255-kube-api-access-xhcxh\") on node \"crc\" DevicePath \"\"" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.231213 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77516e8ed973cb51ce0f8ef1ccd6d8061953f14b24185ac4520331931e62daf0" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.231269 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-95hv2" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.407519 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435fe0b1-7f5a-4a67-b7da-5b0e23561255" path="/var/lib/kubelet/pods/435fe0b1-7f5a-4a67-b7da-5b0e23561255/volumes" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586021 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7p982/crc-debug-wwqjb"] Feb 14 11:52:50 crc kubenswrapper[4736]: E0214 11:52:50.586360 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerName="extract-utilities" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586377 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerName="extract-utilities" Feb 14 11:52:50 crc kubenswrapper[4736]: E0214 11:52:50.586400 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerName="registry-server" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586408 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerName="registry-server" Feb 14 11:52:50 crc kubenswrapper[4736]: E0214 11:52:50.586419 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerName="extract-utilities" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586427 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerName="extract-utilities" Feb 14 11:52:50 crc kubenswrapper[4736]: E0214 11:52:50.586438 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerName="registry-server" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586444 4736 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerName="registry-server" Feb 14 11:52:50 crc kubenswrapper[4736]: E0214 11:52:50.586469 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435fe0b1-7f5a-4a67-b7da-5b0e23561255" containerName="container-00" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586476 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="435fe0b1-7f5a-4a67-b7da-5b0e23561255" containerName="container-00" Feb 14 11:52:50 crc kubenswrapper[4736]: E0214 11:52:50.586485 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerName="extract-content" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586491 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerName="extract-content" Feb 14 11:52:50 crc kubenswrapper[4736]: E0214 11:52:50.586507 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerName="extract-content" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586512 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerName="extract-content" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586668 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa1dfe8-b647-4b05-9199-bfd8788d0205" containerName="registry-server" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586694 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0945f07-c306-44b5-9eb9-2e88f3f0f0b7" containerName="registry-server" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.586704 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="435fe0b1-7f5a-4a67-b7da-5b0e23561255" containerName="container-00" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.587265 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.664259 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz4vb\" (UniqueName: \"kubernetes.io/projected/ac78aef8-106f-4aef-8150-2eb9b055f449-kube-api-access-rz4vb\") pod \"crc-debug-wwqjb\" (UID: \"ac78aef8-106f-4aef-8150-2eb9b055f449\") " pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.664532 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac78aef8-106f-4aef-8150-2eb9b055f449-host\") pod \"crc-debug-wwqjb\" (UID: \"ac78aef8-106f-4aef-8150-2eb9b055f449\") " pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.766601 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz4vb\" (UniqueName: \"kubernetes.io/projected/ac78aef8-106f-4aef-8150-2eb9b055f449-kube-api-access-rz4vb\") pod \"crc-debug-wwqjb\" (UID: \"ac78aef8-106f-4aef-8150-2eb9b055f449\") " pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.766657 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac78aef8-106f-4aef-8150-2eb9b055f449-host\") pod \"crc-debug-wwqjb\" (UID: \"ac78aef8-106f-4aef-8150-2eb9b055f449\") " pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.766856 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac78aef8-106f-4aef-8150-2eb9b055f449-host\") pod \"crc-debug-wwqjb\" (UID: \"ac78aef8-106f-4aef-8150-2eb9b055f449\") " pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:50 crc 
kubenswrapper[4736]: I0214 11:52:50.783718 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz4vb\" (UniqueName: \"kubernetes.io/projected/ac78aef8-106f-4aef-8150-2eb9b055f449-kube-api-access-rz4vb\") pod \"crc-debug-wwqjb\" (UID: \"ac78aef8-106f-4aef-8150-2eb9b055f449\") " pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:50 crc kubenswrapper[4736]: I0214 11:52:50.907711 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:51 crc kubenswrapper[4736]: I0214 11:52:51.240155 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/crc-debug-wwqjb" event={"ID":"ac78aef8-106f-4aef-8150-2eb9b055f449","Type":"ContainerStarted","Data":"fd3ec692bcc9585e85da50fd821bc90526df624fd08fb5b98127741534d0100b"} Feb 14 11:52:52 crc kubenswrapper[4736]: I0214 11:52:52.249757 4736 generic.go:334] "Generic (PLEG): container finished" podID="ac78aef8-106f-4aef-8150-2eb9b055f449" containerID="6b44c7d525b10dc37dd8b95847e948b89007dabf721c4257d86b422ce4658493" exitCode=0 Feb 14 11:52:52 crc kubenswrapper[4736]: I0214 11:52:52.249860 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/crc-debug-wwqjb" event={"ID":"ac78aef8-106f-4aef-8150-2eb9b055f449","Type":"ContainerDied","Data":"6b44c7d525b10dc37dd8b95847e948b89007dabf721c4257d86b422ce4658493"} Feb 14 11:52:53 crc kubenswrapper[4736]: I0214 11:52:53.350190 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:53 crc kubenswrapper[4736]: I0214 11:52:53.429638 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac78aef8-106f-4aef-8150-2eb9b055f449-host\") pod \"ac78aef8-106f-4aef-8150-2eb9b055f449\" (UID: \"ac78aef8-106f-4aef-8150-2eb9b055f449\") " Feb 14 11:52:53 crc kubenswrapper[4736]: I0214 11:52:53.430027 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz4vb\" (UniqueName: \"kubernetes.io/projected/ac78aef8-106f-4aef-8150-2eb9b055f449-kube-api-access-rz4vb\") pod \"ac78aef8-106f-4aef-8150-2eb9b055f449\" (UID: \"ac78aef8-106f-4aef-8150-2eb9b055f449\") " Feb 14 11:52:53 crc kubenswrapper[4736]: I0214 11:52:53.430221 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78aef8-106f-4aef-8150-2eb9b055f449-host" (OuterVolumeSpecName: "host") pod "ac78aef8-106f-4aef-8150-2eb9b055f449" (UID: "ac78aef8-106f-4aef-8150-2eb9b055f449"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:52:53 crc kubenswrapper[4736]: I0214 11:52:53.430720 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac78aef8-106f-4aef-8150-2eb9b055f449-host\") on node \"crc\" DevicePath \"\"" Feb 14 11:52:53 crc kubenswrapper[4736]: I0214 11:52:53.454968 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac78aef8-106f-4aef-8150-2eb9b055f449-kube-api-access-rz4vb" (OuterVolumeSpecName: "kube-api-access-rz4vb") pod "ac78aef8-106f-4aef-8150-2eb9b055f449" (UID: "ac78aef8-106f-4aef-8150-2eb9b055f449"). InnerVolumeSpecName "kube-api-access-rz4vb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:52:53 crc kubenswrapper[4736]: I0214 11:52:53.532164 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz4vb\" (UniqueName: \"kubernetes.io/projected/ac78aef8-106f-4aef-8150-2eb9b055f449-kube-api-access-rz4vb\") on node \"crc\" DevicePath \"\"" Feb 14 11:52:54 crc kubenswrapper[4736]: I0214 11:52:54.266723 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/crc-debug-wwqjb" event={"ID":"ac78aef8-106f-4aef-8150-2eb9b055f449","Type":"ContainerDied","Data":"fd3ec692bcc9585e85da50fd821bc90526df624fd08fb5b98127741534d0100b"} Feb 14 11:52:54 crc kubenswrapper[4736]: I0214 11:52:54.266783 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd3ec692bcc9585e85da50fd821bc90526df624fd08fb5b98127741534d0100b" Feb 14 11:52:54 crc kubenswrapper[4736]: I0214 11:52:54.266785 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-wwqjb" Feb 14 11:52:54 crc kubenswrapper[4736]: I0214 11:52:54.383175 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7p982/crc-debug-wwqjb"] Feb 14 11:52:54 crc kubenswrapper[4736]: I0214 11:52:54.391708 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7p982/crc-debug-wwqjb"] Feb 14 11:52:54 crc kubenswrapper[4736]: I0214 11:52:54.406755 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac78aef8-106f-4aef-8150-2eb9b055f449" path="/var/lib/kubelet/pods/ac78aef8-106f-4aef-8150-2eb9b055f449/volumes" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.150272 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7p982/crc-debug-b8lgt"] Feb 14 11:52:56 crc kubenswrapper[4736]: E0214 11:52:56.150944 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac78aef8-106f-4aef-8150-2eb9b055f449" 
containerName="container-00" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.150960 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac78aef8-106f-4aef-8150-2eb9b055f449" containerName="container-00" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.151163 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac78aef8-106f-4aef-8150-2eb9b055f449" containerName="container-00" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.151728 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.296814 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f563c023-e472-483a-a7fc-8e7ae45c6b57-host\") pod \"crc-debug-b8lgt\" (UID: \"f563c023-e472-483a-a7fc-8e7ae45c6b57\") " pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.297238 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk52h\" (UniqueName: \"kubernetes.io/projected/f563c023-e472-483a-a7fc-8e7ae45c6b57-kube-api-access-hk52h\") pod \"crc-debug-b8lgt\" (UID: \"f563c023-e472-483a-a7fc-8e7ae45c6b57\") " pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.398358 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk52h\" (UniqueName: \"kubernetes.io/projected/f563c023-e472-483a-a7fc-8e7ae45c6b57-kube-api-access-hk52h\") pod \"crc-debug-b8lgt\" (UID: \"f563c023-e472-483a-a7fc-8e7ae45c6b57\") " pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.398501 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/f563c023-e472-483a-a7fc-8e7ae45c6b57-host\") pod \"crc-debug-b8lgt\" (UID: \"f563c023-e472-483a-a7fc-8e7ae45c6b57\") " pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.398593 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f563c023-e472-483a-a7fc-8e7ae45c6b57-host\") pod \"crc-debug-b8lgt\" (UID: \"f563c023-e472-483a-a7fc-8e7ae45c6b57\") " pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.429534 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk52h\" (UniqueName: \"kubernetes.io/projected/f563c023-e472-483a-a7fc-8e7ae45c6b57-kube-api-access-hk52h\") pod \"crc-debug-b8lgt\" (UID: \"f563c023-e472-483a-a7fc-8e7ae45c6b57\") " pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:52:56 crc kubenswrapper[4736]: I0214 11:52:56.469367 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:52:57 crc kubenswrapper[4736]: I0214 11:52:57.293977 4736 generic.go:334] "Generic (PLEG): container finished" podID="f563c023-e472-483a-a7fc-8e7ae45c6b57" containerID="dec29cb8aafe113b59a2738d2e1fadf5db1db91ff809587865583bbd3c30e8db" exitCode=0 Feb 14 11:52:57 crc kubenswrapper[4736]: I0214 11:52:57.294087 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/crc-debug-b8lgt" event={"ID":"f563c023-e472-483a-a7fc-8e7ae45c6b57","Type":"ContainerDied","Data":"dec29cb8aafe113b59a2738d2e1fadf5db1db91ff809587865583bbd3c30e8db"} Feb 14 11:52:57 crc kubenswrapper[4736]: I0214 11:52:57.295017 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/crc-debug-b8lgt" event={"ID":"f563c023-e472-483a-a7fc-8e7ae45c6b57","Type":"ContainerStarted","Data":"d151788b44772c95f12e6085da692941d69db95de9a3ec9d751aae9d40deb578"} Feb 14 11:52:57 crc kubenswrapper[4736]: I0214 11:52:57.341004 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7p982/crc-debug-b8lgt"] Feb 14 11:52:57 crc kubenswrapper[4736]: I0214 11:52:57.354331 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7p982/crc-debug-b8lgt"] Feb 14 11:52:58 crc kubenswrapper[4736]: I0214 11:52:58.413732 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:52:58 crc kubenswrapper[4736]: I0214 11:52:58.545034 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk52h\" (UniqueName: \"kubernetes.io/projected/f563c023-e472-483a-a7fc-8e7ae45c6b57-kube-api-access-hk52h\") pod \"f563c023-e472-483a-a7fc-8e7ae45c6b57\" (UID: \"f563c023-e472-483a-a7fc-8e7ae45c6b57\") " Feb 14 11:52:58 crc kubenswrapper[4736]: I0214 11:52:58.545324 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f563c023-e472-483a-a7fc-8e7ae45c6b57-host\") pod \"f563c023-e472-483a-a7fc-8e7ae45c6b57\" (UID: \"f563c023-e472-483a-a7fc-8e7ae45c6b57\") " Feb 14 11:52:58 crc kubenswrapper[4736]: I0214 11:52:58.545416 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f563c023-e472-483a-a7fc-8e7ae45c6b57-host" (OuterVolumeSpecName: "host") pod "f563c023-e472-483a-a7fc-8e7ae45c6b57" (UID: "f563c023-e472-483a-a7fc-8e7ae45c6b57"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 11:52:58 crc kubenswrapper[4736]: I0214 11:52:58.545733 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f563c023-e472-483a-a7fc-8e7ae45c6b57-host\") on node \"crc\" DevicePath \"\"" Feb 14 11:52:58 crc kubenswrapper[4736]: I0214 11:52:58.550864 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f563c023-e472-483a-a7fc-8e7ae45c6b57-kube-api-access-hk52h" (OuterVolumeSpecName: "kube-api-access-hk52h") pod "f563c023-e472-483a-a7fc-8e7ae45c6b57" (UID: "f563c023-e472-483a-a7fc-8e7ae45c6b57"). InnerVolumeSpecName "kube-api-access-hk52h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:52:58 crc kubenswrapper[4736]: I0214 11:52:58.647051 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk52h\" (UniqueName: \"kubernetes.io/projected/f563c023-e472-483a-a7fc-8e7ae45c6b57-kube-api-access-hk52h\") on node \"crc\" DevicePath \"\"" Feb 14 11:52:59 crc kubenswrapper[4736]: I0214 11:52:59.310092 4736 scope.go:117] "RemoveContainer" containerID="dec29cb8aafe113b59a2738d2e1fadf5db1db91ff809587865583bbd3c30e8db" Feb 14 11:52:59 crc kubenswrapper[4736]: I0214 11:52:59.310110 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7p982/crc-debug-b8lgt" Feb 14 11:53:00 crc kubenswrapper[4736]: I0214 11:53:00.410598 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f563c023-e472-483a-a7fc-8e7ae45c6b57" path="/var/lib/kubelet/pods/f563c023-e472-483a-a7fc-8e7ae45c6b57/volumes" Feb 14 11:53:17 crc kubenswrapper[4736]: I0214 11:53:17.695221 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:53:17 crc kubenswrapper[4736]: I0214 11:53:17.696914 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:53:32 crc kubenswrapper[4736]: I0214 11:53:32.817084 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79d9bb575d-6pwpg_fe4bb48e-4d5f-4b38-b862-d2fe632087a8/barbican-api/0.log" Feb 14 11:53:33 crc kubenswrapper[4736]: I0214 11:53:33.200208 
4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79d9bb575d-6pwpg_fe4bb48e-4d5f-4b38-b862-d2fe632087a8/barbican-api-log/0.log" Feb 14 11:53:33 crc kubenswrapper[4736]: I0214 11:53:33.242257 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85957cbc8-r7xrw_a1af432c-5ab8-4eb5-87f0-2f9519c1004b/barbican-keystone-listener/0.log" Feb 14 11:53:33 crc kubenswrapper[4736]: I0214 11:53:33.512596 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85957cbc8-r7xrw_a1af432c-5ab8-4eb5-87f0-2f9519c1004b/barbican-keystone-listener-log/0.log" Feb 14 11:53:33 crc kubenswrapper[4736]: I0214 11:53:33.561347 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79c85f78bf-qrrmn_dc8cc8f5-bfab-490d-be14-44be8090fb21/barbican-worker-log/0.log" Feb 14 11:53:33 crc kubenswrapper[4736]: I0214 11:53:33.590338 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79c85f78bf-qrrmn_dc8cc8f5-bfab-490d-be14-44be8090fb21/barbican-worker/0.log" Feb 14 11:53:33 crc kubenswrapper[4736]: I0214 11:53:33.841305 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0e312a4e-321c-45f9-a15b-b41e8a500356/ceilometer-central-agent/0.log" Feb 14 11:53:33 crc kubenswrapper[4736]: I0214 11:53:33.884758 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd_3bc4af51-ea9d-471b-a6d1-6330e3f48a5a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.032373 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0e312a4e-321c-45f9-a15b-b41e8a500356/ceilometer-notification-agent/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.092260 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_0e312a4e-321c-45f9-a15b-b41e8a500356/proxy-httpd/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.136244 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0e312a4e-321c-45f9-a15b-b41e8a500356/sg-core/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.265970 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32441c5d-4041-4687-b31b-fb121c4d01a7/cinder-api/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.330600 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32441c5d-4041-4687-b31b-fb121c4d01a7/cinder-api-log/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.418843 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_13d207cc-8160-449b-8049-04047efb4b20/cinder-scheduler/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.558535 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_13d207cc-8160-449b-8049-04047efb4b20/probe/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.612514 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp_3b5bed78-9221-4954-b969-9a676c00a110/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.802309 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-m97sp_38ba8e01-1e02-4937-aeac-badb36edee69/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:34 crc kubenswrapper[4736]: I0214 11:53:34.924686 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-4wwj2_87fe97c7-f360-4d7b-988f-0779aa692cde/init/0.log" Feb 14 11:53:35 crc kubenswrapper[4736]: I0214 11:53:35.121629 
4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-4wwj2_87fe97c7-f360-4d7b-988f-0779aa692cde/init/0.log" Feb 14 11:53:35 crc kubenswrapper[4736]: I0214 11:53:35.267440 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd_fdcfdd0a-6f5a-44be-862f-2329a1f0a60c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:35 crc kubenswrapper[4736]: I0214 11:53:35.312512 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-4wwj2_87fe97c7-f360-4d7b-988f-0779aa692cde/dnsmasq-dns/0.log" Feb 14 11:53:35 crc kubenswrapper[4736]: I0214 11:53:35.415328 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_31f01831-be73-46fa-815b-bc32d58fb0fd/glance-httpd/0.log" Feb 14 11:53:35 crc kubenswrapper[4736]: I0214 11:53:35.522193 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_31f01831-be73-46fa-815b-bc32d58fb0fd/glance-log/0.log" Feb 14 11:53:35 crc kubenswrapper[4736]: I0214 11:53:35.630780 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f0aa2a69-bea9-4934-9b60-209ecd22eb0a/glance-log/0.log" Feb 14 11:53:35 crc kubenswrapper[4736]: I0214 11:53:35.641535 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f0aa2a69-bea9-4934-9b60-209ecd22eb0a/glance-httpd/0.log" Feb 14 11:53:36 crc kubenswrapper[4736]: I0214 11:53:36.035270 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78d96c5d8-mfqqp_bd003c66-fc46-445a-a88a-23a7c17f9747/horizon/2.log" Feb 14 11:53:36 crc kubenswrapper[4736]: I0214 11:53:36.055032 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78d96c5d8-mfqqp_bd003c66-fc46-445a-a88a-23a7c17f9747/horizon/1.log" Feb 14 11:53:36 
crc kubenswrapper[4736]: I0214 11:53:36.331198 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd_928b193b-069f-4f4b-80a6-13c347302fcf/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:36 crc kubenswrapper[4736]: I0214 11:53:36.332838 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78d96c5d8-mfqqp_bd003c66-fc46-445a-a88a-23a7c17f9747/horizon-log/0.log" Feb 14 11:53:36 crc kubenswrapper[4736]: I0214 11:53:36.377862 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vxrr7_a0ab8569-328c-4ffb-89c5-d08ae95e5016/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:36 crc kubenswrapper[4736]: I0214 11:53:36.696370 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a60c0b91-8564-472a-b4a7-8bab9a773d39/kube-state-metrics/0.log" Feb 14 11:53:37 crc kubenswrapper[4736]: I0214 11:53:37.015854 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7677d9df65-nl5rx_6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1/keystone-api/0.log" Feb 14 11:53:37 crc kubenswrapper[4736]: I0214 11:53:37.043941 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw_9e4b30a3-64e1-4f40-b895-41ac069e85f9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:37 crc kubenswrapper[4736]: I0214 11:53:37.615045 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm_2c3c97eb-a17e-429f-84da-df394440c78c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:37 crc kubenswrapper[4736]: I0214 11:53:37.661929 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7979c77cb9-ql2gq_8af015af-390d-4300-95e1-976c308f136c/neutron-httpd/0.log" 
Feb 14 11:53:37 crc kubenswrapper[4736]: I0214 11:53:37.841615 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7979c77cb9-ql2gq_8af015af-390d-4300-95e1-976c308f136c/neutron-api/0.log" Feb 14 11:53:38 crc kubenswrapper[4736]: I0214 11:53:38.403340 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2cd086e8-4f54-40fa-9f03-f2434e27ce21/nova-cell0-conductor-conductor/0.log" Feb 14 11:53:38 crc kubenswrapper[4736]: I0214 11:53:38.732860 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_22a7ad59-032e-457c-84ee-a3145f286106/nova-cell1-conductor-conductor/0.log" Feb 14 11:53:39 crc kubenswrapper[4736]: I0214 11:53:39.028644 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_56c90a64-d883-4865-a393-2a9aec8e43a8/nova-cell1-novncproxy-novncproxy/0.log" Feb 14 11:53:39 crc kubenswrapper[4736]: I0214 11:53:39.076374 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_056456b3-9911-4a23-9322-7072e9170cbe/nova-api-log/0.log" Feb 14 11:53:39 crc kubenswrapper[4736]: I0214 11:53:39.298091 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-sj8v2_e40d6c31-4f67-46cc-b2a2-991133a68003/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:39 crc kubenswrapper[4736]: I0214 11:53:39.407856 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_501f8e75-5b0d-4226-b3d4-3ac92c58911c/nova-metadata-log/0.log" Feb 14 11:53:39 crc kubenswrapper[4736]: I0214 11:53:39.447981 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_056456b3-9911-4a23-9322-7072e9170cbe/nova-api-api/0.log" Feb 14 11:53:40 crc kubenswrapper[4736]: I0214 11:53:40.158973 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_394f1f5f-0af2-4451-b497-6f15295099a4/nova-scheduler-scheduler/0.log" Feb 14 11:53:40 crc kubenswrapper[4736]: I0214 11:53:40.381933 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3a11355-8757-409d-b440-6b1a372ddd72/mysql-bootstrap/0.log" Feb 14 11:53:40 crc kubenswrapper[4736]: I0214 11:53:40.556862 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3a11355-8757-409d-b440-6b1a372ddd72/galera/0.log" Feb 14 11:53:40 crc kubenswrapper[4736]: I0214 11:53:40.605878 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3a11355-8757-409d-b440-6b1a372ddd72/mysql-bootstrap/0.log" Feb 14 11:53:40 crc kubenswrapper[4736]: I0214 11:53:40.785562 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f077df65-4b06-4908-87bb-d08572879c62/mysql-bootstrap/0.log" Feb 14 11:53:40 crc kubenswrapper[4736]: I0214 11:53:40.983180 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_501f8e75-5b0d-4226-b3d4-3ac92c58911c/nova-metadata-metadata/0.log" Feb 14 11:53:40 crc kubenswrapper[4736]: I0214 11:53:40.985628 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f077df65-4b06-4908-87bb-d08572879c62/mysql-bootstrap/0.log" Feb 14 11:53:41 crc kubenswrapper[4736]: I0214 11:53:41.027318 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f077df65-4b06-4908-87bb-d08572879c62/galera/0.log" Feb 14 11:53:41 crc kubenswrapper[4736]: I0214 11:53:41.205782 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ec5ce106-52f4-4985-a2b9-99266fe3d2d9/openstackclient/0.log" Feb 14 11:53:41 crc kubenswrapper[4736]: I0214 11:53:41.300077 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-cxr2b_1cc96bf2-147a-454d-8443-20e850d25ad0/openstack-network-exporter/0.log" Feb 14 11:53:41 crc kubenswrapper[4736]: I0214 11:53:41.415282 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-msd5j_2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba/ovn-controller/0.log" Feb 14 11:53:42 crc kubenswrapper[4736]: I0214 11:53:42.084440 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6dm75_4dc5a707-dee1-457c-9100-e80b9eb96f6c/ovsdb-server-init/0.log" Feb 14 11:53:42 crc kubenswrapper[4736]: I0214 11:53:42.326673 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6dm75_4dc5a707-dee1-457c-9100-e80b9eb96f6c/ovsdb-server-init/0.log" Feb 14 11:53:42 crc kubenswrapper[4736]: I0214 11:53:42.416500 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6dm75_4dc5a707-dee1-457c-9100-e80b9eb96f6c/ovs-vswitchd/0.log" Feb 14 11:53:42 crc kubenswrapper[4736]: I0214 11:53:42.426225 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6dm75_4dc5a707-dee1-457c-9100-e80b9eb96f6c/ovsdb-server/0.log" Feb 14 11:53:42 crc kubenswrapper[4736]: I0214 11:53:42.621524 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5wgrt_c0fc1129-2e48-4afa-ad54-fce50eaaeddc/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:42 crc kubenswrapper[4736]: I0214 11:53:42.646515 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d28847a2-993a-4124-b138-ecec67828807/ovn-northd/0.log" Feb 14 11:53:42 crc kubenswrapper[4736]: I0214 11:53:42.677962 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d28847a2-993a-4124-b138-ecec67828807/openstack-network-exporter/0.log" Feb 14 11:53:42 crc kubenswrapper[4736]: I0214 11:53:42.902821 4736 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0705ea43-d70f-400e-ac09-07dbebf128ea/openstack-network-exporter/0.log" Feb 14 11:53:42 crc kubenswrapper[4736]: I0214 11:53:42.935299 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0705ea43-d70f-400e-ac09-07dbebf128ea/ovsdbserver-nb/0.log" Feb 14 11:53:43 crc kubenswrapper[4736]: I0214 11:53:43.141984 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18/openstack-network-exporter/0.log" Feb 14 11:53:43 crc kubenswrapper[4736]: I0214 11:53:43.163112 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18/ovsdbserver-sb/0.log" Feb 14 11:53:43 crc kubenswrapper[4736]: I0214 11:53:43.463319 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-557678d96b-tqmtc_ec9d0890-b994-4ada-a802-a43cbe2fc50e/placement-api/0.log" Feb 14 11:53:43 crc kubenswrapper[4736]: I0214 11:53:43.562428 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_070e414c-ea91-48aa-871d-ebfed740c5b3/setup-container/0.log" Feb 14 11:53:43 crc kubenswrapper[4736]: I0214 11:53:43.581915 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-557678d96b-tqmtc_ec9d0890-b994-4ada-a802-a43cbe2fc50e/placement-log/0.log" Feb 14 11:53:43 crc kubenswrapper[4736]: I0214 11:53:43.752979 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_070e414c-ea91-48aa-871d-ebfed740c5b3/setup-container/0.log" Feb 14 11:53:43 crc kubenswrapper[4736]: I0214 11:53:43.854627 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7b1cfbb-0f84-4915-bae6-0bd165726dba/setup-container/0.log" Feb 14 11:53:43 crc kubenswrapper[4736]: I0214 11:53:43.893939 4736 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_070e414c-ea91-48aa-871d-ebfed740c5b3/rabbitmq/0.log" Feb 14 11:53:44 crc kubenswrapper[4736]: I0214 11:53:44.071036 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7b1cfbb-0f84-4915-bae6-0bd165726dba/setup-container/0.log" Feb 14 11:53:44 crc kubenswrapper[4736]: I0214 11:53:44.187544 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7b1cfbb-0f84-4915-bae6-0bd165726dba/rabbitmq/0.log" Feb 14 11:53:44 crc kubenswrapper[4736]: I0214 11:53:44.210555 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh_9cdcee64-bc6c-40ca-8db3-50335948db44/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:44 crc kubenswrapper[4736]: I0214 11:53:44.484204 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-75mgr_443efa04-503f-4571-b1a3-d31c88bc0a5c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:44 crc kubenswrapper[4736]: I0214 11:53:44.592364 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn_f0c54768-3b9b-423a-8099-0282ad3ea027/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:44 crc kubenswrapper[4736]: I0214 11:53:44.784596 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9xcr4_4795d395-5dcc-4284-b6ee-607b2c9a1f97/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:44 crc kubenswrapper[4736]: I0214 11:53:44.901789 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-kdd4d_a8f1507b-722e-46b0-a239-48a5100e9971/ssh-known-hosts-edpm-deployment/0.log" Feb 14 11:53:45 crc kubenswrapper[4736]: I0214 11:53:45.089918 4736 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-proxy-6c6f565b75-vzhbj_6c072889-cf21-4f12-a6eb-14fe8409b860/proxy-server/0.log" Feb 14 11:53:45 crc kubenswrapper[4736]: I0214 11:53:45.188927 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ccs82_02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a/swift-ring-rebalance/0.log" Feb 14 11:53:45 crc kubenswrapper[4736]: I0214 11:53:45.246878 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c6f565b75-vzhbj_6c072889-cf21-4f12-a6eb-14fe8409b860/proxy-httpd/0.log" Feb 14 11:53:45 crc kubenswrapper[4736]: I0214 11:53:45.641736 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/account-auditor/0.log" Feb 14 11:53:45 crc kubenswrapper[4736]: I0214 11:53:45.717382 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/account-reaper/0.log" Feb 14 11:53:45 crc kubenswrapper[4736]: I0214 11:53:45.745091 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/account-replicator/0.log" Feb 14 11:53:45 crc kubenswrapper[4736]: I0214 11:53:45.782725 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/account-server/0.log" Feb 14 11:53:45 crc kubenswrapper[4736]: I0214 11:53:45.929255 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/container-server/0.log" Feb 14 11:53:45 crc kubenswrapper[4736]: I0214 11:53:45.940778 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/container-auditor/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.034419 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/container-replicator/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.117107 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/container-updater/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.163435 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-expirer/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.243411 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-auditor/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.330811 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-replicator/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.418702 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-updater/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.429339 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-server/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.570297 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/swift-recon-cron/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.612047 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/rsync/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.843218 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5_8c44413d-b97e-45f6-80d1-71f5e489c4ac/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:46 crc kubenswrapper[4736]: I0214 11:53:46.965876 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_ab2bcae4-a5d8-471d-a031-b0e810759ab1/tempest-tests-tempest-tests-runner/0.log" Feb 14 11:53:47 crc kubenswrapper[4736]: I0214 11:53:47.126858 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fb072c2c-7982-4e0b-825a-1b64b951f0a7/test-operator-logs-container/0.log" Feb 14 11:53:47 crc kubenswrapper[4736]: I0214 11:53:47.345044 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-pflph_e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 11:53:47 crc kubenswrapper[4736]: I0214 11:53:47.695255 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:53:47 crc kubenswrapper[4736]: I0214 11:53:47.695303 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:53:47 crc kubenswrapper[4736]: I0214 11:53:47.695362 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:53:47 crc kubenswrapper[4736]: I0214 11:53:47.696271 4736 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c9a9f31049018d899c2d6e7f661d4eda2a270223213dd9c82f3c2316bb40fcb"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:53:47 crc kubenswrapper[4736]: I0214 11:53:47.696339 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://0c9a9f31049018d899c2d6e7f661d4eda2a270223213dd9c82f3c2316bb40fcb" gracePeriod=600 Feb 14 11:53:48 crc kubenswrapper[4736]: I0214 11:53:48.784280 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="0c9a9f31049018d899c2d6e7f661d4eda2a270223213dd9c82f3c2316bb40fcb" exitCode=0 Feb 14 11:53:48 crc kubenswrapper[4736]: I0214 11:53:48.784877 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"0c9a9f31049018d899c2d6e7f661d4eda2a270223213dd9c82f3c2316bb40fcb"} Feb 14 11:53:48 crc kubenswrapper[4736]: I0214 11:53:48.784939 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380"} Feb 14 11:53:48 crc kubenswrapper[4736]: I0214 11:53:48.784958 4736 scope.go:117] "RemoveContainer" containerID="6838e5d04dcc49c55b2cd0998db4b6ef8ba7e24dd5dc57530c9e105b7270189e" Feb 14 11:54:01 crc kubenswrapper[4736]: I0214 11:54:01.735597 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_0a412d55-7134-4b50-b303-3174348c85fa/memcached/0.log" Feb 14 11:54:20 crc kubenswrapper[4736]: I0214 11:54:20.694052 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/util/0.log" Feb 14 11:54:20 crc kubenswrapper[4736]: I0214 11:54:20.827383 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/util/0.log" Feb 14 11:54:20 crc kubenswrapper[4736]: I0214 11:54:20.883022 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/pull/0.log" Feb 14 11:54:20 crc kubenswrapper[4736]: I0214 11:54:20.945796 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/pull/0.log" Feb 14 11:54:21 crc kubenswrapper[4736]: I0214 11:54:21.144582 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/util/0.log" Feb 14 11:54:21 crc kubenswrapper[4736]: I0214 11:54:21.157673 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/pull/0.log" Feb 14 11:54:21 crc kubenswrapper[4736]: I0214 11:54:21.177910 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/extract/0.log" Feb 14 11:54:21 crc kubenswrapper[4736]: I0214 11:54:21.608863 4736 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-ddq5f_049efcc4-9d6e-47ff-8476-a29e06c6f362/manager/0.log" Feb 14 11:54:21 crc kubenswrapper[4736]: I0214 11:54:21.994629 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68fd459cc4-lwpwl_8b8b4f4d-ca75-4127-bf64-3db5839a9ccb/manager/0.log" Feb 14 11:54:22 crc kubenswrapper[4736]: I0214 11:54:22.275201 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-9595d6797-g7wc9_f0eae102-9c64-42bb-b7eb-64c54f3bf219/manager/0.log" Feb 14 11:54:22 crc kubenswrapper[4736]: I0214 11:54:22.521773 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-54fb488b88-pcttg_c9abe211-c0d9-4487-856f-12a41e4ad006/manager/0.log" Feb 14 11:54:23 crc kubenswrapper[4736]: I0214 11:54:23.162443 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6494cdbf8f-mt8zx_55648e35-636d-4321-bdfe-e7171a70e87d/manager/0.log" Feb 14 11:54:23 crc kubenswrapper[4736]: I0214 11:54:23.217787 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-66d6b5f488-6wjlq_434321f7-faee-40e8-8d52-6c863d100da6/manager/0.log" Feb 14 11:54:23 crc kubenswrapper[4736]: I0214 11:54:23.520240 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c78d668d5-7b9sw_0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1/manager/0.log" Feb 14 11:54:23 crc kubenswrapper[4736]: I0214 11:54:23.572157 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-768c8b45bb-jbwwk_d6185be6-e012-411d-9b85-c971e12aebbd/manager/0.log" Feb 14 11:54:23 crc kubenswrapper[4736]: I0214 11:54:23.789352 4736 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-76fd76856-dmpmv_07e92003-0bdf-4e0b-a35c-d8f96e3a57f8/manager/0.log" Feb 14 11:54:23 crc kubenswrapper[4736]: I0214 11:54:23.871414 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66997756f6-p2d5f_c2104410-cd10-43d8-84d1-8cd837d65ed4/manager/0.log" Feb 14 11:54:24 crc kubenswrapper[4736]: I0214 11:54:24.310633 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54967dbbdf-ptgcj_6a8d2df6-3e2b-4120-8848-9ab5ae903da5/manager/0.log" Feb 14 11:54:24 crc kubenswrapper[4736]: I0214 11:54:24.557667 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5ddd85db87-spx2d_fc679b24-ad26-46c8-8d9e-28ef80a48090/manager/0.log" Feb 14 11:54:25 crc kubenswrapper[4736]: I0214 11:54:25.033072 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt_4800ac63-235a-4486-a61b-018e85369028/manager/0.log" Feb 14 11:54:25 crc kubenswrapper[4736]: I0214 11:54:25.509326 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-69b468cbcf-657fg_b74fa186-0772-4d4e-abcd-b04bc6fa4751/operator/0.log" Feb 14 11:54:25 crc kubenswrapper[4736]: I0214 11:54:25.750759 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4hxtr_6d54b312-9619-450b-a6b2-980caae9860e/registry-server/0.log" Feb 14 11:54:26 crc kubenswrapper[4736]: I0214 11:54:26.499790 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-85c99d655-9zxzs_05c7d113-70d7-4bbf-9c0e-4981d602acd3/manager/0.log" Feb 14 11:54:26 crc kubenswrapper[4736]: I0214 11:54:26.716354 
4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57bd55f9b7-pqhqz_bd3596d4-d10d-45e0-b236-d0cca28bc09b/manager/0.log" Feb 14 11:54:27 crc kubenswrapper[4736]: I0214 11:54:27.047065 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4x6xz_dd5e2ee2-c48c-40fd-9a02-ce871056600f/operator/0.log" Feb 14 11:54:27 crc kubenswrapper[4736]: I0214 11:54:27.279018 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-79558bbfbf-9dgfx_391b46e9-4f14-4b12-9c9a-800eecfc51af/manager/0.log" Feb 14 11:54:27 crc kubenswrapper[4736]: I0214 11:54:27.538867 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-745bbbd77b-cztvd_4b0b03c4-b031-408b-a6de-8b3af1064ebd/manager/0.log" Feb 14 11:54:27 crc kubenswrapper[4736]: I0214 11:54:27.698524 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-h52ld_332bd6ec-7fc0-4c92-bd0e-491f238a8680/manager/0.log" Feb 14 11:54:27 crc kubenswrapper[4736]: I0214 11:54:27.734595 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-qr776_692863b5-b658-4d50-928e-b5357a279851/manager/0.log" Feb 14 11:54:27 crc kubenswrapper[4736]: I0214 11:54:27.846644 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7f46fb7bd6-whwbk_18979fdb-9863-4a61-a6cc-5984b041d7c6/manager/0.log" Feb 14 11:54:27 crc kubenswrapper[4736]: I0214 11:54:27.973307 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c469bc6bb-2p58b_13ed197e-630c-4788-863e-23be47efe228/manager/0.log" Feb 14 11:54:31 crc kubenswrapper[4736]: I0214 
11:54:31.379509 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-c4b7d6946-vg9f9_a1aa4225-909d-49ae-8ac7-d987a760f2d2/manager/0.log" Feb 14 11:54:50 crc kubenswrapper[4736]: I0214 11:54:50.702083 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8bxbt_b3550c81-1f31-4800-b399-4168db6f20fc/control-plane-machine-set-operator/0.log" Feb 14 11:54:50 crc kubenswrapper[4736]: I0214 11:54:50.849189 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-68p48_b305d178-1f44-4e74-9a0f-9a6c95fb4c45/kube-rbac-proxy/0.log" Feb 14 11:54:50 crc kubenswrapper[4736]: I0214 11:54:50.864404 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-68p48_b305d178-1f44-4e74-9a0f-9a6c95fb4c45/machine-api-operator/0.log" Feb 14 11:55:05 crc kubenswrapper[4736]: I0214 11:55:05.571580 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-lsgkg_70c4aa44-ebfe-49e1-9e2a-d4f507794c4e/cert-manager-controller/0.log" Feb 14 11:55:05 crc kubenswrapper[4736]: I0214 11:55:05.706797 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xbtbh_c30450ff-d5e3-482b-9d67-63ac08a238e2/cert-manager-cainjector/0.log" Feb 14 11:55:05 crc kubenswrapper[4736]: I0214 11:55:05.839252 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-vg8jq_d7a4fec3-20be-4ba1-838e-45d9a777ba6a/cert-manager-webhook/0.log" Feb 14 11:55:20 crc kubenswrapper[4736]: I0214 11:55:20.086727 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-989cb_e3223403-6c82-4af7-8a7a-902982281d8b/nmstate-console-plugin/0.log" Feb 14 11:55:20 crc 
kubenswrapper[4736]: I0214 11:55:20.339178 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tfkkw_1799375f-7713-43d7-a0b2-9c76efff7daf/nmstate-handler/0.log" Feb 14 11:55:20 crc kubenswrapper[4736]: I0214 11:55:20.389685 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-krg4j_e04e8849-dd0e-4a3f-98e0-8925563c7145/kube-rbac-proxy/0.log" Feb 14 11:55:20 crc kubenswrapper[4736]: I0214 11:55:20.486882 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-krg4j_e04e8849-dd0e-4a3f-98e0-8925563c7145/nmstate-metrics/0.log" Feb 14 11:55:20 crc kubenswrapper[4736]: I0214 11:55:20.582086 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-8d5p5_f9f3eda0-51a6-4de7-86d2-7b68836bcb67/nmstate-operator/0.log" Feb 14 11:55:20 crc kubenswrapper[4736]: I0214 11:55:20.697221 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-2gjv9_73130fe2-047e-44a9-986b-0734857df7a6/nmstate-webhook/0.log" Feb 14 11:55:52 crc kubenswrapper[4736]: I0214 11:55:52.207103 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-kxmtf_d4a413eb-d17a-4f7f-bd22-d4f41f915d53/kube-rbac-proxy/0.log" Feb 14 11:55:52 crc kubenswrapper[4736]: I0214 11:55:52.329638 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-kxmtf_d4a413eb-d17a-4f7f-bd22-d4f41f915d53/controller/0.log" Feb 14 11:55:52 crc kubenswrapper[4736]: I0214 11:55:52.375637 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-llcsn_78e00005-015f-402a-9308-473763478d28/frr-k8s-webhook-server/0.log" Feb 14 11:55:52 crc kubenswrapper[4736]: I0214 11:55:52.552261 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-frr-files/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.098538 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-metrics/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.105893 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-frr-files/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.146972 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-reloader/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.191121 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-reloader/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.327543 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-frr-files/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.387324 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-metrics/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.400950 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-reloader/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.426901 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-metrics/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.590253 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-metrics/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.596959 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-reloader/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.667230 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-frr-files/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.669910 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/controller/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.756855 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/frr-metrics/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.954265 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/kube-rbac-proxy/0.log" Feb 14 11:55:53 crc kubenswrapper[4736]: I0214 11:55:53.999307 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/kube-rbac-proxy-frr/0.log" Feb 14 11:55:54 crc kubenswrapper[4736]: I0214 11:55:54.039264 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/reloader/0.log" Feb 14 11:55:54 crc kubenswrapper[4736]: I0214 11:55:54.343852 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-575f5cbc8b-mg2p4_b64209dc-83e7-4c67-920c-0e8d9369d823/manager/0.log" Feb 14 11:55:54 crc kubenswrapper[4736]: I0214 11:55:54.435930 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69fc489c64-2rjlv_ab7820de-1649-428e-b823-d28364520352/webhook-server/0.log" Feb 14 11:55:54 crc kubenswrapper[4736]: I0214 11:55:54.684309 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tm7cx_7b23b424-9a4f-44c9-a999-7721acb1b135/kube-rbac-proxy/0.log" Feb 14 11:55:55 crc kubenswrapper[4736]: I0214 11:55:55.150551 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tm7cx_7b23b424-9a4f-44c9-a999-7721acb1b135/speaker/0.log" Feb 14 11:55:55 crc kubenswrapper[4736]: I0214 11:55:55.269462 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/frr/0.log" Feb 14 11:56:09 crc kubenswrapper[4736]: I0214 11:56:09.644498 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/util/0.log" Feb 14 11:56:09 crc kubenswrapper[4736]: I0214 11:56:09.803823 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/util/0.log" Feb 14 11:56:09 crc kubenswrapper[4736]: I0214 11:56:09.825460 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/pull/0.log" Feb 14 11:56:09 crc kubenswrapper[4736]: I0214 11:56:09.848412 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/pull/0.log" Feb 14 11:56:10 crc kubenswrapper[4736]: I0214 11:56:10.047254 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/extract/0.log" Feb 14 11:56:10 crc kubenswrapper[4736]: I0214 11:56:10.055515 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/util/0.log" Feb 14 11:56:10 crc kubenswrapper[4736]: I0214 11:56:10.085616 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/pull/0.log" Feb 14 11:56:10 crc kubenswrapper[4736]: I0214 11:56:10.736083 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-utilities/0.log" Feb 14 11:56:10 crc kubenswrapper[4736]: I0214 11:56:10.924512 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-utilities/0.log" Feb 14 11:56:10 crc kubenswrapper[4736]: I0214 11:56:10.969415 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-content/0.log" Feb 14 11:56:10 crc kubenswrapper[4736]: I0214 11:56:10.978470 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-content/0.log" Feb 14 11:56:11 crc kubenswrapper[4736]: I0214 11:56:11.163820 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-utilities/0.log" Feb 14 11:56:11 crc kubenswrapper[4736]: I0214 11:56:11.166449 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-content/0.log" Feb 14 11:56:11 crc kubenswrapper[4736]: I0214 11:56:11.407389 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-utilities/0.log" Feb 14 11:56:11 crc kubenswrapper[4736]: I0214 11:56:11.708567 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/registry-server/0.log" Feb 14 11:56:11 crc kubenswrapper[4736]: I0214 11:56:11.710921 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-content/0.log" Feb 14 11:56:11 crc kubenswrapper[4736]: I0214 11:56:11.713804 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-utilities/0.log" Feb 14 11:56:11 crc kubenswrapper[4736]: I0214 11:56:11.717185 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-content/0.log" Feb 14 11:56:12 crc kubenswrapper[4736]: I0214 11:56:12.293155 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-utilities/0.log" Feb 14 11:56:12 crc kubenswrapper[4736]: I0214 11:56:12.325761 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-content/0.log" Feb 14 11:56:12 crc kubenswrapper[4736]: I0214 11:56:12.615866 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/util/0.log" Feb 14 11:56:12 crc kubenswrapper[4736]: I0214 11:56:12.905982 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/registry-server/0.log" Feb 14 11:56:12 crc kubenswrapper[4736]: I0214 11:56:12.920765 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/util/0.log" Feb 14 11:56:12 crc kubenswrapper[4736]: I0214 11:56:12.929381 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/pull/0.log" Feb 14 11:56:12 crc kubenswrapper[4736]: I0214 11:56:12.941599 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/pull/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.118695 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/util/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.159568 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/extract/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.168928 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/pull/0.log" Feb 14 11:56:13 crc 
kubenswrapper[4736]: I0214 11:56:13.281312 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-v7kg4_9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5/marketplace-operator/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.372710 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-utilities/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.573377 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-utilities/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.597774 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-content/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.618972 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-content/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.745446 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-utilities/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.769062 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-content/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.936677 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/registry-server/0.log" Feb 14 11:56:13 crc kubenswrapper[4736]: I0214 11:56:13.941319 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-utilities/0.log" Feb 14 11:56:14 crc kubenswrapper[4736]: I0214 11:56:14.120768 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-utilities/0.log" Feb 14 11:56:14 crc kubenswrapper[4736]: I0214 11:56:14.123579 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-content/0.log" Feb 14 11:56:14 crc kubenswrapper[4736]: I0214 11:56:14.132034 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-content/0.log" Feb 14 11:56:14 crc kubenswrapper[4736]: I0214 11:56:14.374069 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-content/0.log" Feb 14 11:56:14 crc kubenswrapper[4736]: I0214 11:56:14.387439 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-utilities/0.log" Feb 14 11:56:14 crc kubenswrapper[4736]: I0214 11:56:14.782243 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/registry-server/0.log" Feb 14 11:56:17 crc kubenswrapper[4736]: I0214 11:56:17.695158 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:56:17 crc kubenswrapper[4736]: I0214 11:56:17.695448 4736 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:56:47 crc kubenswrapper[4736]: I0214 11:56:47.695159 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:56:47 crc kubenswrapper[4736]: I0214 11:56:47.695612 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:57:17 crc kubenswrapper[4736]: I0214 11:57:17.698874 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 11:57:17 crc kubenswrapper[4736]: I0214 11:57:17.699484 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 11:57:17 crc kubenswrapper[4736]: I0214 11:57:17.699558 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 11:57:17 crc 
kubenswrapper[4736]: I0214 11:57:17.701379 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 11:57:17 crc kubenswrapper[4736]: I0214 11:57:17.701472 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" gracePeriod=600 Feb 14 11:57:17 crc kubenswrapper[4736]: E0214 11:57:17.828738 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:57:18 crc kubenswrapper[4736]: I0214 11:57:18.076664 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" exitCode=0 Feb 14 11:57:18 crc kubenswrapper[4736]: I0214 11:57:18.076707 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380"} Feb 14 11:57:18 crc kubenswrapper[4736]: I0214 11:57:18.076757 4736 scope.go:117] "RemoveContainer" 
containerID="0c9a9f31049018d899c2d6e7f661d4eda2a270223213dd9c82f3c2316bb40fcb" Feb 14 11:57:18 crc kubenswrapper[4736]: I0214 11:57:18.077706 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:57:18 crc kubenswrapper[4736]: E0214 11:57:18.078286 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:57:30 crc kubenswrapper[4736]: I0214 11:57:30.411403 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:57:30 crc kubenswrapper[4736]: E0214 11:57:30.412266 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:57:41 crc kubenswrapper[4736]: I0214 11:57:41.397405 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:57:41 crc kubenswrapper[4736]: E0214 11:57:41.397992 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:57:55 crc kubenswrapper[4736]: I0214 11:57:55.396858 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:57:55 crc kubenswrapper[4736]: E0214 11:57:55.397706 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.336978 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ctc8j"] Feb 14 11:58:08 crc kubenswrapper[4736]: E0214 11:58:08.337949 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f563c023-e472-483a-a7fc-8e7ae45c6b57" containerName="container-00" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.337963 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f563c023-e472-483a-a7fc-8e7ae45c6b57" containerName="container-00" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.338189 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f563c023-e472-483a-a7fc-8e7ae45c6b57" containerName="container-00" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.339567 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.355947 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ctc8j"] Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.499051 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-catalog-content\") pod \"certified-operators-ctc8j\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.499150 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g72np\" (UniqueName: \"kubernetes.io/projected/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-kube-api-access-g72np\") pod \"certified-operators-ctc8j\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.499189 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-utilities\") pod \"certified-operators-ctc8j\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.602111 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-catalog-content\") pod \"certified-operators-ctc8j\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.602223 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g72np\" (UniqueName: \"kubernetes.io/projected/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-kube-api-access-g72np\") pod \"certified-operators-ctc8j\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.602255 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-utilities\") pod \"certified-operators-ctc8j\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.603243 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-catalog-content\") pod \"certified-operators-ctc8j\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.603964 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-utilities\") pod \"certified-operators-ctc8j\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.627946 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g72np\" (UniqueName: \"kubernetes.io/projected/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-kube-api-access-g72np\") pod \"certified-operators-ctc8j\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:08 crc kubenswrapper[4736]: I0214 11:58:08.662485 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:09 crc kubenswrapper[4736]: I0214 11:58:09.373485 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ctc8j"] Feb 14 11:58:09 crc kubenswrapper[4736]: I0214 11:58:09.397317 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:58:09 crc kubenswrapper[4736]: E0214 11:58:09.397506 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:58:09 crc kubenswrapper[4736]: I0214 11:58:09.589247 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctc8j" event={"ID":"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9","Type":"ContainerStarted","Data":"f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66"} Feb 14 11:58:09 crc kubenswrapper[4736]: I0214 11:58:09.589505 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctc8j" event={"ID":"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9","Type":"ContainerStarted","Data":"c58bb9d8ba7939bacdfe43f86a5339d90c87deca9233b614e5c4c038f859ef5b"} Feb 14 11:58:10 crc kubenswrapper[4736]: I0214 11:58:10.598655 4736 generic.go:334] "Generic (PLEG): container finished" podID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerID="f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66" exitCode=0 Feb 14 11:58:10 crc kubenswrapper[4736]: I0214 11:58:10.598755 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctc8j" 
event={"ID":"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9","Type":"ContainerDied","Data":"f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66"} Feb 14 11:58:10 crc kubenswrapper[4736]: I0214 11:58:10.602187 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 11:58:11 crc kubenswrapper[4736]: I0214 11:58:11.607589 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctc8j" event={"ID":"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9","Type":"ContainerStarted","Data":"0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5"} Feb 14 11:58:13 crc kubenswrapper[4736]: I0214 11:58:13.626363 4736 generic.go:334] "Generic (PLEG): container finished" podID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerID="0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5" exitCode=0 Feb 14 11:58:13 crc kubenswrapper[4736]: I0214 11:58:13.626641 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctc8j" event={"ID":"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9","Type":"ContainerDied","Data":"0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5"} Feb 14 11:58:14 crc kubenswrapper[4736]: I0214 11:58:14.647711 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctc8j" event={"ID":"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9","Type":"ContainerStarted","Data":"2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab"} Feb 14 11:58:14 crc kubenswrapper[4736]: I0214 11:58:14.677106 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ctc8j" podStartSLOduration=3.12116181 podStartE2EDuration="6.677090159s" podCreationTimestamp="2026-02-14 11:58:08 +0000 UTC" firstStartedPulling="2026-02-14 11:58:10.601767854 +0000 UTC m=+4600.970395222" lastFinishedPulling="2026-02-14 11:58:14.157696193 +0000 UTC 
m=+4604.526323571" observedRunningTime="2026-02-14 11:58:14.670991067 +0000 UTC m=+4605.039618445" watchObservedRunningTime="2026-02-14 11:58:14.677090159 +0000 UTC m=+4605.045717527" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.705582 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kx487"] Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.707621 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.726956 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kx487"] Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.782376 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-utilities\") pod \"redhat-operators-kx487\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.782448 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-catalog-content\") pod \"redhat-operators-kx487\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.782496 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgnlx\" (UniqueName: \"kubernetes.io/projected/de6eae84-e100-457f-af2e-8b060975bd47-kube-api-access-fgnlx\") pod \"redhat-operators-kx487\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 
11:58:16.884020 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-utilities\") pod \"redhat-operators-kx487\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.884099 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-catalog-content\") pod \"redhat-operators-kx487\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.884152 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgnlx\" (UniqueName: \"kubernetes.io/projected/de6eae84-e100-457f-af2e-8b060975bd47-kube-api-access-fgnlx\") pod \"redhat-operators-kx487\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.884730 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-utilities\") pod \"redhat-operators-kx487\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.884857 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-catalog-content\") pod \"redhat-operators-kx487\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:16 crc kubenswrapper[4736]: I0214 11:58:16.908541 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fgnlx\" (UniqueName: \"kubernetes.io/projected/de6eae84-e100-457f-af2e-8b060975bd47-kube-api-access-fgnlx\") pod \"redhat-operators-kx487\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:17 crc kubenswrapper[4736]: I0214 11:58:17.077665 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:17 crc kubenswrapper[4736]: I0214 11:58:17.650876 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kx487"] Feb 14 11:58:17 crc kubenswrapper[4736]: I0214 11:58:17.673513 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx487" event={"ID":"de6eae84-e100-457f-af2e-8b060975bd47","Type":"ContainerStarted","Data":"dab7ae383988a7dc586f80b79118149f6bd4c01e097fcd9e1d9b9c4477c56206"} Feb 14 11:58:18 crc kubenswrapper[4736]: I0214 11:58:18.662934 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:18 crc kubenswrapper[4736]: I0214 11:58:18.663349 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:18 crc kubenswrapper[4736]: I0214 11:58:18.683565 4736 generic.go:334] "Generic (PLEG): container finished" podID="de6eae84-e100-457f-af2e-8b060975bd47" containerID="6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68" exitCode=0 Feb 14 11:58:18 crc kubenswrapper[4736]: I0214 11:58:18.683668 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx487" event={"ID":"de6eae84-e100-457f-af2e-8b060975bd47","Type":"ContainerDied","Data":"6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68"} Feb 14 11:58:18 crc kubenswrapper[4736]: I0214 11:58:18.729329 4736 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:19 crc kubenswrapper[4736]: I0214 11:58:19.782168 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:20 crc kubenswrapper[4736]: I0214 11:58:20.406410 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:58:20 crc kubenswrapper[4736]: E0214 11:58:20.406950 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:58:20 crc kubenswrapper[4736]: I0214 11:58:20.723948 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx487" event={"ID":"de6eae84-e100-457f-af2e-8b060975bd47","Type":"ContainerStarted","Data":"93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1"} Feb 14 11:58:20 crc kubenswrapper[4736]: I0214 11:58:20.897301 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ctc8j"] Feb 14 11:58:21 crc kubenswrapper[4736]: I0214 11:58:21.733450 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ctc8j" podUID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerName="registry-server" containerID="cri-o://2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab" gracePeriod=2 Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.200221 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.321521 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g72np\" (UniqueName: \"kubernetes.io/projected/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-kube-api-access-g72np\") pod \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.321608 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-utilities\") pod \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.321641 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-catalog-content\") pod \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\" (UID: \"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9\") " Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.322353 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-utilities" (OuterVolumeSpecName: "utilities") pod "9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" (UID: "9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.322613 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.339098 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-kube-api-access-g72np" (OuterVolumeSpecName: "kube-api-access-g72np") pod "9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" (UID: "9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9"). InnerVolumeSpecName "kube-api-access-g72np". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.367973 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" (UID: "9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.436989 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g72np\" (UniqueName: \"kubernetes.io/projected/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-kube-api-access-g72np\") on node \"crc\" DevicePath \"\"" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.437025 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.744718 4736 generic.go:334] "Generic (PLEG): container finished" podID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerID="2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab" exitCode=0 Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.745208 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ctc8j" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.746070 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctc8j" event={"ID":"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9","Type":"ContainerDied","Data":"2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab"} Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.746104 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctc8j" event={"ID":"9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9","Type":"ContainerDied","Data":"c58bb9d8ba7939bacdfe43f86a5339d90c87deca9233b614e5c4c038f859ef5b"} Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.746135 4736 scope.go:117] "RemoveContainer" containerID="2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.775516 4736 scope.go:117] "RemoveContainer" 
containerID="0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.780181 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ctc8j"] Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.788395 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ctc8j"] Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.793931 4736 scope.go:117] "RemoveContainer" containerID="f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.834645 4736 scope.go:117] "RemoveContainer" containerID="2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab" Feb 14 11:58:22 crc kubenswrapper[4736]: E0214 11:58:22.835023 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab\": container with ID starting with 2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab not found: ID does not exist" containerID="2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.835065 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab"} err="failed to get container status \"2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab\": rpc error: code = NotFound desc = could not find container \"2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab\": container with ID starting with 2c1beb813d057b0bb77a97f0a04d78508d9a97d75491db7fb007fc67c31db5ab not found: ID does not exist" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.835118 4736 scope.go:117] "RemoveContainer" 
containerID="0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5" Feb 14 11:58:22 crc kubenswrapper[4736]: E0214 11:58:22.835530 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5\": container with ID starting with 0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5 not found: ID does not exist" containerID="0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.835563 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5"} err="failed to get container status \"0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5\": rpc error: code = NotFound desc = could not find container \"0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5\": container with ID starting with 0837a1c0e5dbaf7d9744274e5816a2834a6fc7824e5fee5a6c40578c117994b5 not found: ID does not exist" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.835592 4736 scope.go:117] "RemoveContainer" containerID="f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66" Feb 14 11:58:22 crc kubenswrapper[4736]: E0214 11:58:22.835903 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66\": container with ID starting with f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66 not found: ID does not exist" containerID="f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66" Feb 14 11:58:22 crc kubenswrapper[4736]: I0214 11:58:22.836065 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66"} err="failed to get container status \"f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66\": rpc error: code = NotFound desc = could not find container \"f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66\": container with ID starting with f8d8e2c2c177db183b2b3a510e6ed8997596229b88064a9a086ea47d03911a66 not found: ID does not exist" Feb 14 11:58:24 crc kubenswrapper[4736]: I0214 11:58:24.410039 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" path="/var/lib/kubelet/pods/9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9/volumes" Feb 14 11:58:25 crc kubenswrapper[4736]: I0214 11:58:25.787409 4736 generic.go:334] "Generic (PLEG): container finished" podID="de6eae84-e100-457f-af2e-8b060975bd47" containerID="93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1" exitCode=0 Feb 14 11:58:25 crc kubenswrapper[4736]: I0214 11:58:25.787413 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx487" event={"ID":"de6eae84-e100-457f-af2e-8b060975bd47","Type":"ContainerDied","Data":"93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1"} Feb 14 11:58:26 crc kubenswrapper[4736]: I0214 11:58:26.797460 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx487" event={"ID":"de6eae84-e100-457f-af2e-8b060975bd47","Type":"ContainerStarted","Data":"7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737"} Feb 14 11:58:26 crc kubenswrapper[4736]: I0214 11:58:26.824665 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kx487" podStartSLOduration=3.194094142 podStartE2EDuration="10.824645357s" podCreationTimestamp="2026-02-14 11:58:16 +0000 UTC" firstStartedPulling="2026-02-14 11:58:18.685704176 +0000 UTC m=+4609.054331554" 
lastFinishedPulling="2026-02-14 11:58:26.316255391 +0000 UTC m=+4616.684882769" observedRunningTime="2026-02-14 11:58:26.814603444 +0000 UTC m=+4617.183230852" watchObservedRunningTime="2026-02-14 11:58:26.824645357 +0000 UTC m=+4617.193272725" Feb 14 11:58:27 crc kubenswrapper[4736]: I0214 11:58:27.078078 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:27 crc kubenswrapper[4736]: I0214 11:58:27.078118 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:28 crc kubenswrapper[4736]: I0214 11:58:28.627565 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kx487" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="registry-server" probeResult="failure" output=< Feb 14 11:58:28 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:58:28 crc kubenswrapper[4736]: > Feb 14 11:58:31 crc kubenswrapper[4736]: I0214 11:58:31.397838 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:58:31 crc kubenswrapper[4736]: E0214 11:58:31.398576 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:58:37 crc kubenswrapper[4736]: I0214 11:58:37.553003 4736 scope.go:117] "RemoveContainer" containerID="70790b89ff1f455ca11272ed1f2d4a85bb6a1e950030267ca5572718783932bb" Feb 14 11:58:38 crc kubenswrapper[4736]: I0214 11:58:38.133962 4736 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-kx487" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="registry-server" probeResult="failure" output=< Feb 14 11:58:38 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 11:58:38 crc kubenswrapper[4736]: > Feb 14 11:58:39 crc kubenswrapper[4736]: I0214 11:58:39.942847 4736 generic.go:334] "Generic (PLEG): container finished" podID="6039b228-04e4-4ce5-817b-192d8fdec1be" containerID="612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a" exitCode=0 Feb 14 11:58:39 crc kubenswrapper[4736]: I0214 11:58:39.943298 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7p982/must-gather-ksxhl" event={"ID":"6039b228-04e4-4ce5-817b-192d8fdec1be","Type":"ContainerDied","Data":"612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a"} Feb 14 11:58:39 crc kubenswrapper[4736]: I0214 11:58:39.944405 4736 scope.go:117] "RemoveContainer" containerID="612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a" Feb 14 11:58:40 crc kubenswrapper[4736]: I0214 11:58:40.595209 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7p982_must-gather-ksxhl_6039b228-04e4-4ce5-817b-192d8fdec1be/gather/0.log" Feb 14 11:58:43 crc kubenswrapper[4736]: I0214 11:58:43.397326 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:58:43 crc kubenswrapper[4736]: E0214 11:58:43.398375 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:58:47 crc 
kubenswrapper[4736]: I0214 11:58:47.146931 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:47 crc kubenswrapper[4736]: I0214 11:58:47.213099 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:47 crc kubenswrapper[4736]: I0214 11:58:47.922575 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kx487"] Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.031588 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kx487" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="registry-server" containerID="cri-o://7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737" gracePeriod=2 Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.498608 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.686926 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-catalog-content\") pod \"de6eae84-e100-457f-af2e-8b060975bd47\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.687200 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-utilities\") pod \"de6eae84-e100-457f-af2e-8b060975bd47\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.687314 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgnlx\" (UniqueName: \"kubernetes.io/projected/de6eae84-e100-457f-af2e-8b060975bd47-kube-api-access-fgnlx\") pod \"de6eae84-e100-457f-af2e-8b060975bd47\" (UID: \"de6eae84-e100-457f-af2e-8b060975bd47\") " Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.689953 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-utilities" (OuterVolumeSpecName: "utilities") pod "de6eae84-e100-457f-af2e-8b060975bd47" (UID: "de6eae84-e100-457f-af2e-8b060975bd47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.696955 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6eae84-e100-457f-af2e-8b060975bd47-kube-api-access-fgnlx" (OuterVolumeSpecName: "kube-api-access-fgnlx") pod "de6eae84-e100-457f-af2e-8b060975bd47" (UID: "de6eae84-e100-457f-af2e-8b060975bd47"). InnerVolumeSpecName "kube-api-access-fgnlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.790206 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.790233 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgnlx\" (UniqueName: \"kubernetes.io/projected/de6eae84-e100-457f-af2e-8b060975bd47-kube-api-access-fgnlx\") on node \"crc\" DevicePath \"\"" Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.808161 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de6eae84-e100-457f-af2e-8b060975bd47" (UID: "de6eae84-e100-457f-af2e-8b060975bd47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:58:49 crc kubenswrapper[4736]: I0214 11:58:49.892610 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6eae84-e100-457f-af2e-8b060975bd47-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.043144 4736 generic.go:334] "Generic (PLEG): container finished" podID="de6eae84-e100-457f-af2e-8b060975bd47" containerID="7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737" exitCode=0 Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.043184 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx487" event={"ID":"de6eae84-e100-457f-af2e-8b060975bd47","Type":"ContainerDied","Data":"7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737"} Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.043205 4736 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kx487" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.043215 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kx487" event={"ID":"de6eae84-e100-457f-af2e-8b060975bd47","Type":"ContainerDied","Data":"dab7ae383988a7dc586f80b79118149f6bd4c01e097fcd9e1d9b9c4477c56206"} Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.043234 4736 scope.go:117] "RemoveContainer" containerID="7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.066912 4736 scope.go:117] "RemoveContainer" containerID="93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.087793 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kx487"] Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.097548 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kx487"] Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.113944 4736 scope.go:117] "RemoveContainer" containerID="6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.157950 4736 scope.go:117] "RemoveContainer" containerID="7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737" Feb 14 11:58:50 crc kubenswrapper[4736]: E0214 11:58:50.158463 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737\": container with ID starting with 7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737 not found: ID does not exist" containerID="7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.158504 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737"} err="failed to get container status \"7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737\": rpc error: code = NotFound desc = could not find container \"7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737\": container with ID starting with 7da4e567305ae197f3785d40271af41cc9c86d1988a54d5db12a596b0eca8737 not found: ID does not exist" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.158529 4736 scope.go:117] "RemoveContainer" containerID="93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1" Feb 14 11:58:50 crc kubenswrapper[4736]: E0214 11:58:50.159019 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1\": container with ID starting with 93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1 not found: ID does not exist" containerID="93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.159039 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1"} err="failed to get container status \"93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1\": rpc error: code = NotFound desc = could not find container \"93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1\": container with ID starting with 93bae3fc81c2dc94b65ce70414955d5cf24cc6f9eb45a61ad41abd173574eaa1 not found: ID does not exist" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.159051 4736 scope.go:117] "RemoveContainer" containerID="6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68" Feb 14 11:58:50 crc kubenswrapper[4736]: E0214 
11:58:50.159510 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68\": container with ID starting with 6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68 not found: ID does not exist" containerID="6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.159536 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68"} err="failed to get container status \"6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68\": rpc error: code = NotFound desc = could not find container \"6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68\": container with ID starting with 6a21a44c75c8d678a4442a030a93a7540e37478bb129d3aa62ea610c3694fd68 not found: ID does not exist" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.193279 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7p982/must-gather-ksxhl"] Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.193881 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7p982/must-gather-ksxhl" podUID="6039b228-04e4-4ce5-817b-192d8fdec1be" containerName="copy" containerID="cri-o://01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933" gracePeriod=2 Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.208261 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7p982/must-gather-ksxhl"] Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.471462 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6eae84-e100-457f-af2e-8b060975bd47" path="/var/lib/kubelet/pods/de6eae84-e100-457f-af2e-8b060975bd47/volumes" Feb 14 11:58:50 crc 
kubenswrapper[4736]: I0214 11:58:50.658694 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7p982_must-gather-ksxhl_6039b228-04e4-4ce5-817b-192d8fdec1be/copy/0.log" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.659337 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.839683 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6039b228-04e4-4ce5-817b-192d8fdec1be-must-gather-output\") pod \"6039b228-04e4-4ce5-817b-192d8fdec1be\" (UID: \"6039b228-04e4-4ce5-817b-192d8fdec1be\") " Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.839928 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dllc4\" (UniqueName: \"kubernetes.io/projected/6039b228-04e4-4ce5-817b-192d8fdec1be-kube-api-access-dllc4\") pod \"6039b228-04e4-4ce5-817b-192d8fdec1be\" (UID: \"6039b228-04e4-4ce5-817b-192d8fdec1be\") " Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.848884 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6039b228-04e4-4ce5-817b-192d8fdec1be-kube-api-access-dllc4" (OuterVolumeSpecName: "kube-api-access-dllc4") pod "6039b228-04e4-4ce5-817b-192d8fdec1be" (UID: "6039b228-04e4-4ce5-817b-192d8fdec1be"). InnerVolumeSpecName "kube-api-access-dllc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 11:58:50 crc kubenswrapper[4736]: I0214 11:58:50.941921 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dllc4\" (UniqueName: \"kubernetes.io/projected/6039b228-04e4-4ce5-817b-192d8fdec1be-kube-api-access-dllc4\") on node \"crc\" DevicePath \"\"" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.018101 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6039b228-04e4-4ce5-817b-192d8fdec1be-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6039b228-04e4-4ce5-817b-192d8fdec1be" (UID: "6039b228-04e4-4ce5-817b-192d8fdec1be"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.043342 4736 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6039b228-04e4-4ce5-817b-192d8fdec1be-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.055849 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7p982_must-gather-ksxhl_6039b228-04e4-4ce5-817b-192d8fdec1be/copy/0.log" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.056135 4736 generic.go:334] "Generic (PLEG): container finished" podID="6039b228-04e4-4ce5-817b-192d8fdec1be" containerID="01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933" exitCode=143 Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.056209 4736 scope.go:117] "RemoveContainer" containerID="01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.056178 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7p982/must-gather-ksxhl" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.104314 4736 scope.go:117] "RemoveContainer" containerID="612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.175216 4736 scope.go:117] "RemoveContainer" containerID="01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933" Feb 14 11:58:51 crc kubenswrapper[4736]: E0214 11:58:51.175796 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933\": container with ID starting with 01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933 not found: ID does not exist" containerID="01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.175827 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933"} err="failed to get container status \"01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933\": rpc error: code = NotFound desc = could not find container \"01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933\": container with ID starting with 01a5e19a30787a72be731335954aedce6d656fa8c439ada3d62c9dea38175933 not found: ID does not exist" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.175849 4736 scope.go:117] "RemoveContainer" containerID="612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a" Feb 14 11:58:51 crc kubenswrapper[4736]: E0214 11:58:51.176059 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a\": container with ID starting with 
612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a not found: ID does not exist" containerID="612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a" Feb 14 11:58:51 crc kubenswrapper[4736]: I0214 11:58:51.176087 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a"} err="failed to get container status \"612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a\": rpc error: code = NotFound desc = could not find container \"612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a\": container with ID starting with 612b463dd94b20745304e974a5a49ed0c0354e9f7cea71d05ce70bd500d7eb3a not found: ID does not exist" Feb 14 11:58:52 crc kubenswrapper[4736]: I0214 11:58:52.409955 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6039b228-04e4-4ce5-817b-192d8fdec1be" path="/var/lib/kubelet/pods/6039b228-04e4-4ce5-817b-192d8fdec1be/volumes" Feb 14 11:58:58 crc kubenswrapper[4736]: I0214 11:58:58.397907 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:58:58 crc kubenswrapper[4736]: E0214 11:58:58.398966 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:59:12 crc kubenswrapper[4736]: I0214 11:59:12.399694 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:59:12 crc kubenswrapper[4736]: E0214 11:59:12.400908 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:59:25 crc kubenswrapper[4736]: I0214 11:59:25.398525 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:59:25 crc kubenswrapper[4736]: E0214 11:59:25.399505 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:59:37 crc kubenswrapper[4736]: I0214 11:59:37.396777 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:59:37 crc kubenswrapper[4736]: E0214 11:59:37.397488 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 11:59:37 crc kubenswrapper[4736]: I0214 11:59:37.649196 4736 scope.go:117] "RemoveContainer" containerID="6b44c7d525b10dc37dd8b95847e948b89007dabf721c4257d86b422ce4658493" Feb 14 11:59:48 crc kubenswrapper[4736]: I0214 11:59:48.397969 4736 scope.go:117] "RemoveContainer" 
containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 11:59:48 crc kubenswrapper[4736]: E0214 11:59:48.399133 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.211872 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5"] Feb 14 12:00:00 crc kubenswrapper[4736]: E0214 12:00:00.213113 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerName="registry-server" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213137 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerName="registry-server" Feb 14 12:00:00 crc kubenswrapper[4736]: E0214 12:00:00.213165 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerName="extract-utilities" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213173 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerName="extract-utilities" Feb 14 12:00:00 crc kubenswrapper[4736]: E0214 12:00:00.213211 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="extract-content" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213220 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="extract-content" Feb 14 12:00:00 crc kubenswrapper[4736]: 
E0214 12:00:00.213236 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerName="extract-content" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213246 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerName="extract-content" Feb 14 12:00:00 crc kubenswrapper[4736]: E0214 12:00:00.213267 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="registry-server" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213276 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="registry-server" Feb 14 12:00:00 crc kubenswrapper[4736]: E0214 12:00:00.213299 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6039b228-04e4-4ce5-817b-192d8fdec1be" containerName="copy" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213309 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6039b228-04e4-4ce5-817b-192d8fdec1be" containerName="copy" Feb 14 12:00:00 crc kubenswrapper[4736]: E0214 12:00:00.213323 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="extract-utilities" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213332 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="extract-utilities" Feb 14 12:00:00 crc kubenswrapper[4736]: E0214 12:00:00.213344 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6039b228-04e4-4ce5-817b-192d8fdec1be" containerName="gather" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213354 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6039b228-04e4-4ce5-817b-192d8fdec1be" containerName="gather" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213637 4736 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6039b228-04e4-4ce5-817b-192d8fdec1be" containerName="copy" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213671 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6039b228-04e4-4ce5-817b-192d8fdec1be" containerName="gather" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213698 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6eae84-e100-457f-af2e-8b060975bd47" containerName="registry-server" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.213721 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd6cb36-62e1-4ae8-bf14-b25c20b39ab9" containerName="registry-server" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.215633 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.227958 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.228080 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.247793 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5"] Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.256477 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xph\" (UniqueName: \"kubernetes.io/projected/284707c8-4c14-405e-8aff-aeff6f484e40-kube-api-access-f6xph\") pod \"collect-profiles-29517840-6pln5\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc 
kubenswrapper[4736]: I0214 12:00:00.256537 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284707c8-4c14-405e-8aff-aeff6f484e40-config-volume\") pod \"collect-profiles-29517840-6pln5\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.258800 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284707c8-4c14-405e-8aff-aeff6f484e40-secret-volume\") pod \"collect-profiles-29517840-6pln5\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.360728 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6xph\" (UniqueName: \"kubernetes.io/projected/284707c8-4c14-405e-8aff-aeff6f484e40-kube-api-access-f6xph\") pod \"collect-profiles-29517840-6pln5\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.360792 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284707c8-4c14-405e-8aff-aeff6f484e40-config-volume\") pod \"collect-profiles-29517840-6pln5\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.360878 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284707c8-4c14-405e-8aff-aeff6f484e40-secret-volume\") pod 
\"collect-profiles-29517840-6pln5\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.361582 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284707c8-4c14-405e-8aff-aeff6f484e40-config-volume\") pod \"collect-profiles-29517840-6pln5\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.375994 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284707c8-4c14-405e-8aff-aeff6f484e40-secret-volume\") pod \"collect-profiles-29517840-6pln5\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.380993 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6xph\" (UniqueName: \"kubernetes.io/projected/284707c8-4c14-405e-8aff-aeff6f484e40-kube-api-access-f6xph\") pod \"collect-profiles-29517840-6pln5\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:00 crc kubenswrapper[4736]: I0214 12:00:00.559925 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:01 crc kubenswrapper[4736]: I0214 12:00:01.090335 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5"] Feb 14 12:00:01 crc kubenswrapper[4736]: I0214 12:00:01.397253 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:00:01 crc kubenswrapper[4736]: E0214 12:00:01.397766 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:00:01 crc kubenswrapper[4736]: I0214 12:00:01.851540 4736 generic.go:334] "Generic (PLEG): container finished" podID="284707c8-4c14-405e-8aff-aeff6f484e40" containerID="ad9a3b370345a75ac551157645f15b896de0c845d3c2a94fb9558480f8ea9d00" exitCode=0 Feb 14 12:00:01 crc kubenswrapper[4736]: I0214 12:00:01.851605 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" event={"ID":"284707c8-4c14-405e-8aff-aeff6f484e40","Type":"ContainerDied","Data":"ad9a3b370345a75ac551157645f15b896de0c845d3c2a94fb9558480f8ea9d00"} Feb 14 12:00:01 crc kubenswrapper[4736]: I0214 12:00:01.851918 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" event={"ID":"284707c8-4c14-405e-8aff-aeff6f484e40","Type":"ContainerStarted","Data":"3a036a321f2fbd6ed1aabe76eddf44533a33ad122e24851f34a59dd8e105a071"} Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.246395 4736 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.319795 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6xph\" (UniqueName: \"kubernetes.io/projected/284707c8-4c14-405e-8aff-aeff6f484e40-kube-api-access-f6xph\") pod \"284707c8-4c14-405e-8aff-aeff6f484e40\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.319881 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284707c8-4c14-405e-8aff-aeff6f484e40-secret-volume\") pod \"284707c8-4c14-405e-8aff-aeff6f484e40\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.319953 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284707c8-4c14-405e-8aff-aeff6f484e40-config-volume\") pod \"284707c8-4c14-405e-8aff-aeff6f484e40\" (UID: \"284707c8-4c14-405e-8aff-aeff6f484e40\") " Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.320643 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284707c8-4c14-405e-8aff-aeff6f484e40-config-volume" (OuterVolumeSpecName: "config-volume") pod "284707c8-4c14-405e-8aff-aeff6f484e40" (UID: "284707c8-4c14-405e-8aff-aeff6f484e40"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.324280 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284707c8-4c14-405e-8aff-aeff6f484e40-kube-api-access-f6xph" (OuterVolumeSpecName: "kube-api-access-f6xph") pod "284707c8-4c14-405e-8aff-aeff6f484e40" (UID: "284707c8-4c14-405e-8aff-aeff6f484e40"). 
InnerVolumeSpecName "kube-api-access-f6xph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.328704 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284707c8-4c14-405e-8aff-aeff6f484e40-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "284707c8-4c14-405e-8aff-aeff6f484e40" (UID: "284707c8-4c14-405e-8aff-aeff6f484e40"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.422629 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284707c8-4c14-405e-8aff-aeff6f484e40-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.422970 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6xph\" (UniqueName: \"kubernetes.io/projected/284707c8-4c14-405e-8aff-aeff6f484e40-kube-api-access-f6xph\") on node \"crc\" DevicePath \"\"" Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.422982 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284707c8-4c14-405e-8aff-aeff6f484e40-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.874212 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" event={"ID":"284707c8-4c14-405e-8aff-aeff6f484e40","Type":"ContainerDied","Data":"3a036a321f2fbd6ed1aabe76eddf44533a33ad122e24851f34a59dd8e105a071"} Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.874440 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a036a321f2fbd6ed1aabe76eddf44533a33ad122e24851f34a59dd8e105a071" Feb 14 12:00:03 crc kubenswrapper[4736]: I0214 12:00:03.874567 4736 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517840-6pln5" Feb 14 12:00:04 crc kubenswrapper[4736]: I0214 12:00:04.339156 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7"] Feb 14 12:00:04 crc kubenswrapper[4736]: I0214 12:00:04.347196 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517795-xcdb7"] Feb 14 12:00:04 crc kubenswrapper[4736]: I0214 12:00:04.407475 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5c0616-3abb-4607-804e-f3c634217dcb" path="/var/lib/kubelet/pods/5c5c0616-3abb-4607-804e-f3c634217dcb/volumes" Feb 14 12:00:14 crc kubenswrapper[4736]: I0214 12:00:14.397290 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:00:14 crc kubenswrapper[4736]: E0214 12:00:14.399651 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:00:28 crc kubenswrapper[4736]: I0214 12:00:28.397315 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:00:28 crc kubenswrapper[4736]: E0214 12:00:28.399126 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:00:37 crc kubenswrapper[4736]: I0214 12:00:37.790159 4736 scope.go:117] "RemoveContainer" containerID="771a55b7c312e88831a05f414a481ffadc0277a83cd3d2a5d66c0ba01d377ecd" Feb 14 12:00:39 crc kubenswrapper[4736]: I0214 12:00:39.398322 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:00:39 crc kubenswrapper[4736]: E0214 12:00:39.399290 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:00:52 crc kubenswrapper[4736]: I0214 12:00:52.398295 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:00:52 crc kubenswrapper[4736]: E0214 12:00:52.400079 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.225493 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29517841-w6gzg"] Feb 14 12:01:00 crc kubenswrapper[4736]: E0214 12:01:00.226703 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284707c8-4c14-405e-8aff-aeff6f484e40" 
containerName="collect-profiles" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.226719 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="284707c8-4c14-405e-8aff-aeff6f484e40" containerName="collect-profiles" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.235048 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="284707c8-4c14-405e-8aff-aeff6f484e40" containerName="collect-profiles" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.236267 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.246157 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517841-w6gzg"] Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.364693 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-config-data\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.364796 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-combined-ca-bundle\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.364879 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhplt\" (UniqueName: \"kubernetes.io/projected/84846f34-844e-488c-934d-ef0b1563721c-kube-api-access-qhplt\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " 
pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.364941 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-fernet-keys\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.466462 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-fernet-keys\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.466585 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-config-data\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.466619 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-combined-ca-bundle\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.466694 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhplt\" (UniqueName: \"kubernetes.io/projected/84846f34-844e-488c-934d-ef0b1563721c-kube-api-access-qhplt\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " 
pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.474113 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-combined-ca-bundle\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.474280 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-fernet-keys\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.481999 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-config-data\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.500144 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhplt\" (UniqueName: \"kubernetes.io/projected/84846f34-844e-488c-934d-ef0b1563721c-kube-api-access-qhplt\") pod \"keystone-cron-29517841-w6gzg\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:00 crc kubenswrapper[4736]: I0214 12:01:00.556085 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:01 crc kubenswrapper[4736]: I0214 12:01:01.120730 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517841-w6gzg"] Feb 14 12:01:01 crc kubenswrapper[4736]: I0214 12:01:01.902071 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517841-w6gzg" event={"ID":"84846f34-844e-488c-934d-ef0b1563721c","Type":"ContainerStarted","Data":"cbc6020184c9420d7b2e5e1fce9cccd3d89f5b671e027a6a3c4a4a57dfddf961"} Feb 14 12:01:01 crc kubenswrapper[4736]: I0214 12:01:01.902308 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517841-w6gzg" event={"ID":"84846f34-844e-488c-934d-ef0b1563721c","Type":"ContainerStarted","Data":"e961b3efe28c2852a417594060d3ec2f3456f9b0195b6fe9f17675178a0bda3f"} Feb 14 12:01:01 crc kubenswrapper[4736]: I0214 12:01:01.929671 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29517841-w6gzg" podStartSLOduration=1.929651156 podStartE2EDuration="1.929651156s" podCreationTimestamp="2026-02-14 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 12:01:01.921674202 +0000 UTC m=+4772.290301640" watchObservedRunningTime="2026-02-14 12:01:01.929651156 +0000 UTC m=+4772.298278524" Feb 14 12:01:05 crc kubenswrapper[4736]: I0214 12:01:05.038842 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6c6f565b75-vzhbj" podUID="6c072889-cf21-4f12-a6eb-14fe8409b860" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 14 12:01:06 crc kubenswrapper[4736]: I0214 12:01:06.397412 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:01:06 crc kubenswrapper[4736]: E0214 12:01:06.398260 4736 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:01:08 crc kubenswrapper[4736]: I0214 12:01:08.980652 4736 generic.go:334] "Generic (PLEG): container finished" podID="84846f34-844e-488c-934d-ef0b1563721c" containerID="cbc6020184c9420d7b2e5e1fce9cccd3d89f5b671e027a6a3c4a4a57dfddf961" exitCode=0 Feb 14 12:01:08 crc kubenswrapper[4736]: I0214 12:01:08.980727 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517841-w6gzg" event={"ID":"84846f34-844e-488c-934d-ef0b1563721c","Type":"ContainerDied","Data":"cbc6020184c9420d7b2e5e1fce9cccd3d89f5b671e027a6a3c4a4a57dfddf961"} Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.382314 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.474838 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-fernet-keys\") pod \"84846f34-844e-488c-934d-ef0b1563721c\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.475073 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-combined-ca-bundle\") pod \"84846f34-844e-488c-934d-ef0b1563721c\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.475238 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-config-data\") pod \"84846f34-844e-488c-934d-ef0b1563721c\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.475268 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhplt\" (UniqueName: \"kubernetes.io/projected/84846f34-844e-488c-934d-ef0b1563721c-kube-api-access-qhplt\") pod \"84846f34-844e-488c-934d-ef0b1563721c\" (UID: \"84846f34-844e-488c-934d-ef0b1563721c\") " Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.481737 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84846f34-844e-488c-934d-ef0b1563721c-kube-api-access-qhplt" (OuterVolumeSpecName: "kube-api-access-qhplt") pod "84846f34-844e-488c-934d-ef0b1563721c" (UID: "84846f34-844e-488c-934d-ef0b1563721c"). InnerVolumeSpecName "kube-api-access-qhplt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.482862 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "84846f34-844e-488c-934d-ef0b1563721c" (UID: "84846f34-844e-488c-934d-ef0b1563721c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.577866 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhplt\" (UniqueName: \"kubernetes.io/projected/84846f34-844e-488c-934d-ef0b1563721c-kube-api-access-qhplt\") on node \"crc\" DevicePath \"\"" Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.577911 4736 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.626397 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84846f34-844e-488c-934d-ef0b1563721c" (UID: "84846f34-844e-488c-934d-ef0b1563721c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.626902 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-config-data" (OuterVolumeSpecName: "config-data") pod "84846f34-844e-488c-934d-ef0b1563721c" (UID: "84846f34-844e-488c-934d-ef0b1563721c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.679317 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 12:01:10 crc kubenswrapper[4736]: I0214 12:01:10.679356 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84846f34-844e-488c-934d-ef0b1563721c-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 12:01:11 crc kubenswrapper[4736]: I0214 12:01:11.029368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517841-w6gzg" event={"ID":"84846f34-844e-488c-934d-ef0b1563721c","Type":"ContainerDied","Data":"e961b3efe28c2852a417594060d3ec2f3456f9b0195b6fe9f17675178a0bda3f"} Feb 14 12:01:11 crc kubenswrapper[4736]: I0214 12:01:11.029699 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e961b3efe28c2852a417594060d3ec2f3456f9b0195b6fe9f17675178a0bda3f" Feb 14 12:01:11 crc kubenswrapper[4736]: I0214 12:01:11.029483 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517841-w6gzg" Feb 14 12:01:20 crc kubenswrapper[4736]: I0214 12:01:20.406507 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:01:20 crc kubenswrapper[4736]: E0214 12:01:20.407421 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:01:31 crc kubenswrapper[4736]: I0214 12:01:31.398228 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:01:31 crc kubenswrapper[4736]: E0214 12:01:31.402317 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:01:42 crc kubenswrapper[4736]: I0214 12:01:42.397648 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:01:42 crc kubenswrapper[4736]: E0214 12:01:42.398475 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.330653 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk64r/must-gather-j7qrq"] Feb 14 12:01:45 crc kubenswrapper[4736]: E0214 12:01:45.331835 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84846f34-844e-488c-934d-ef0b1563721c" containerName="keystone-cron" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.331857 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="84846f34-844e-488c-934d-ef0b1563721c" containerName="keystone-cron" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.332237 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="84846f34-844e-488c-934d-ef0b1563721c" containerName="keystone-cron" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.333724 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.336029 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rk64r"/"kube-root-ca.crt" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.344962 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rk64r"/"openshift-service-ca.crt" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.367992 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rk64r/must-gather-j7qrq"] Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.418903 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xqxm\" (UniqueName: \"kubernetes.io/projected/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-kube-api-access-4xqxm\") pod \"must-gather-j7qrq\" (UID: \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\") " 
pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.418960 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-must-gather-output\") pod \"must-gather-j7qrq\" (UID: \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\") " pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.524534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xqxm\" (UniqueName: \"kubernetes.io/projected/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-kube-api-access-4xqxm\") pod \"must-gather-j7qrq\" (UID: \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\") " pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.524579 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-must-gather-output\") pod \"must-gather-j7qrq\" (UID: \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\") " pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.525013 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-must-gather-output\") pod \"must-gather-j7qrq\" (UID: \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\") " pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.584297 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xqxm\" (UniqueName: \"kubernetes.io/projected/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-kube-api-access-4xqxm\") pod \"must-gather-j7qrq\" (UID: \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\") " 
pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:01:45 crc kubenswrapper[4736]: I0214 12:01:45.657668 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:01:46 crc kubenswrapper[4736]: I0214 12:01:46.217472 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rk64r/must-gather-j7qrq"] Feb 14 12:01:46 crc kubenswrapper[4736]: I0214 12:01:46.415291 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk64r/must-gather-j7qrq" event={"ID":"d2bef6de-2ec5-4c76-b0f6-1f00de71064e","Type":"ContainerStarted","Data":"65f166964bf8a79fbab61d99ec30c8baa1c3152f24698ff057ac3ea333a74310"} Feb 14 12:01:47 crc kubenswrapper[4736]: I0214 12:01:47.421830 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk64r/must-gather-j7qrq" event={"ID":"d2bef6de-2ec5-4c76-b0f6-1f00de71064e","Type":"ContainerStarted","Data":"fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af"} Feb 14 12:01:47 crc kubenswrapper[4736]: I0214 12:01:47.422197 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk64r/must-gather-j7qrq" event={"ID":"d2bef6de-2ec5-4c76-b0f6-1f00de71064e","Type":"ContainerStarted","Data":"9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af"} Feb 14 12:01:50 crc kubenswrapper[4736]: I0214 12:01:50.877307 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rk64r/must-gather-j7qrq" podStartSLOduration=5.877290784 podStartE2EDuration="5.877290784s" podCreationTimestamp="2026-02-14 12:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 12:01:47.453094354 +0000 UTC m=+4817.821721722" watchObservedRunningTime="2026-02-14 12:01:50.877290784 +0000 UTC m=+4821.245918152" Feb 14 12:01:50 crc kubenswrapper[4736]: 
I0214 12:01:50.881313 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk64r/crc-debug-s7dgm"] Feb 14 12:01:50 crc kubenswrapper[4736]: I0214 12:01:50.882403 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:01:50 crc kubenswrapper[4736]: I0214 12:01:50.892246 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rk64r"/"default-dockercfg-brmmf" Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.026667 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmn76\" (UniqueName: \"kubernetes.io/projected/46701fcb-54ad-4711-9635-9e96559129d1-kube-api-access-fmn76\") pod \"crc-debug-s7dgm\" (UID: \"46701fcb-54ad-4711-9635-9e96559129d1\") " pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.026759 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46701fcb-54ad-4711-9635-9e96559129d1-host\") pod \"crc-debug-s7dgm\" (UID: \"46701fcb-54ad-4711-9635-9e96559129d1\") " pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.127796 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmn76\" (UniqueName: \"kubernetes.io/projected/46701fcb-54ad-4711-9635-9e96559129d1-kube-api-access-fmn76\") pod \"crc-debug-s7dgm\" (UID: \"46701fcb-54ad-4711-9635-9e96559129d1\") " pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.127853 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46701fcb-54ad-4711-9635-9e96559129d1-host\") pod \"crc-debug-s7dgm\" (UID: \"46701fcb-54ad-4711-9635-9e96559129d1\") " 
pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.128009 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46701fcb-54ad-4711-9635-9e96559129d1-host\") pod \"crc-debug-s7dgm\" (UID: \"46701fcb-54ad-4711-9635-9e96559129d1\") " pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.149240 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmn76\" (UniqueName: \"kubernetes.io/projected/46701fcb-54ad-4711-9635-9e96559129d1-kube-api-access-fmn76\") pod \"crc-debug-s7dgm\" (UID: \"46701fcb-54ad-4711-9635-9e96559129d1\") " pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.199926 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:01:51 crc kubenswrapper[4736]: W0214 12:01:51.225916 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46701fcb_54ad_4711_9635_9e96559129d1.slice/crio-0f944130c483c76a146aec86d8df7cb0112737515efd237ccfdbdea057683326 WatchSource:0}: Error finding container 0f944130c483c76a146aec86d8df7cb0112737515efd237ccfdbdea057683326: Status 404 returned error can't find the container with id 0f944130c483c76a146aec86d8df7cb0112737515efd237ccfdbdea057683326 Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.484700 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk64r/crc-debug-s7dgm" event={"ID":"46701fcb-54ad-4711-9635-9e96559129d1","Type":"ContainerStarted","Data":"16d9f8d466dfb115b1077d33aaa726336e812acf4dd5b8d49263da774f83e0cb"} Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.485032 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-rk64r/crc-debug-s7dgm" event={"ID":"46701fcb-54ad-4711-9635-9e96559129d1","Type":"ContainerStarted","Data":"0f944130c483c76a146aec86d8df7cb0112737515efd237ccfdbdea057683326"} Feb 14 12:01:51 crc kubenswrapper[4736]: I0214 12:01:51.507297 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rk64r/crc-debug-s7dgm" podStartSLOduration=1.5072771230000002 podStartE2EDuration="1.507277123s" podCreationTimestamp="2026-02-14 12:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 12:01:51.501799439 +0000 UTC m=+4821.870426817" watchObservedRunningTime="2026-02-14 12:01:51.507277123 +0000 UTC m=+4821.875904491" Feb 14 12:01:54 crc kubenswrapper[4736]: I0214 12:01:54.398599 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:01:54 crc kubenswrapper[4736]: E0214 12:01:54.399573 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.630962 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bpf7j"] Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.633586 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.661707 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpf7j"] Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.773865 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-catalog-content\") pod \"community-operators-bpf7j\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.773904 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-utilities\") pod \"community-operators-bpf7j\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.773929 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5qk8\" (UniqueName: \"kubernetes.io/projected/c2b9cc79-8784-40ca-bead-4f0e577a3a77-kube-api-access-c5qk8\") pod \"community-operators-bpf7j\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.876335 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-catalog-content\") pod \"community-operators-bpf7j\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.876772 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-utilities\") pod \"community-operators-bpf7j\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.876901 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5qk8\" (UniqueName: \"kubernetes.io/projected/c2b9cc79-8784-40ca-bead-4f0e577a3a77-kube-api-access-c5qk8\") pod \"community-operators-bpf7j\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.876882 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-catalog-content\") pod \"community-operators-bpf7j\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.877308 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-utilities\") pod \"community-operators-bpf7j\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.895313 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5qk8\" (UniqueName: \"kubernetes.io/projected/c2b9cc79-8784-40ca-bead-4f0e577a3a77-kube-api-access-c5qk8\") pod \"community-operators-bpf7j\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:56 crc kubenswrapper[4736]: I0214 12:01:56.958895 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:01:57 crc kubenswrapper[4736]: I0214 12:01:57.620562 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpf7j"] Feb 14 12:01:58 crc kubenswrapper[4736]: I0214 12:01:58.547208 4736 generic.go:334] "Generic (PLEG): container finished" podID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerID="d58a39875c2ec1f9a1c2e87c070f8069734413ca28de0e2443480d36b9e19078" exitCode=0 Feb 14 12:01:58 crc kubenswrapper[4736]: I0214 12:01:58.547257 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpf7j" event={"ID":"c2b9cc79-8784-40ca-bead-4f0e577a3a77","Type":"ContainerDied","Data":"d58a39875c2ec1f9a1c2e87c070f8069734413ca28de0e2443480d36b9e19078"} Feb 14 12:01:58 crc kubenswrapper[4736]: I0214 12:01:58.547686 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpf7j" event={"ID":"c2b9cc79-8784-40ca-bead-4f0e577a3a77","Type":"ContainerStarted","Data":"aa006d11ac7594d74bdaead3e22187f2397b55329783908178a2bade0983a726"} Feb 14 12:02:00 crc kubenswrapper[4736]: I0214 12:02:00.566292 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpf7j" event={"ID":"c2b9cc79-8784-40ca-bead-4f0e577a3a77","Type":"ContainerStarted","Data":"76f0ecedf0dab58170c53e6287c1af7ae422c4be6be11bdbfd1e8a41e98b1fe0"} Feb 14 12:02:02 crc kubenswrapper[4736]: I0214 12:02:02.584040 4736 generic.go:334] "Generic (PLEG): container finished" podID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerID="76f0ecedf0dab58170c53e6287c1af7ae422c4be6be11bdbfd1e8a41e98b1fe0" exitCode=0 Feb 14 12:02:02 crc kubenswrapper[4736]: I0214 12:02:02.584249 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpf7j" 
event={"ID":"c2b9cc79-8784-40ca-bead-4f0e577a3a77","Type":"ContainerDied","Data":"76f0ecedf0dab58170c53e6287c1af7ae422c4be6be11bdbfd1e8a41e98b1fe0"} Feb 14 12:02:03 crc kubenswrapper[4736]: I0214 12:02:03.593943 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpf7j" event={"ID":"c2b9cc79-8784-40ca-bead-4f0e577a3a77","Type":"ContainerStarted","Data":"eca87d51a3600a2248e8bb737ae1f53df0cbc00cbc7bcfbced3bbb3d40278dc6"} Feb 14 12:02:03 crc kubenswrapper[4736]: I0214 12:02:03.631497 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bpf7j" podStartSLOduration=3.047115884 podStartE2EDuration="7.631474953s" podCreationTimestamp="2026-02-14 12:01:56 +0000 UTC" firstStartedPulling="2026-02-14 12:01:58.549921434 +0000 UTC m=+4828.918548812" lastFinishedPulling="2026-02-14 12:02:03.134280513 +0000 UTC m=+4833.502907881" observedRunningTime="2026-02-14 12:02:03.623178299 +0000 UTC m=+4833.991805667" watchObservedRunningTime="2026-02-14 12:02:03.631474953 +0000 UTC m=+4834.000102321" Feb 14 12:02:06 crc kubenswrapper[4736]: I0214 12:02:06.959659 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:02:06 crc kubenswrapper[4736]: I0214 12:02:06.960194 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:02:07 crc kubenswrapper[4736]: I0214 12:02:07.021454 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:02:08 crc kubenswrapper[4736]: I0214 12:02:08.397275 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:02:08 crc kubenswrapper[4736]: E0214 12:02:08.397799 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.442971 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d6bpx"] Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.445717 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.462200 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6bpx"] Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.557781 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-utilities\") pod \"redhat-marketplace-d6bpx\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.558006 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdhq8\" (UniqueName: \"kubernetes.io/projected/a8369007-c46d-4045-839e-093143669f14-kube-api-access-zdhq8\") pod \"redhat-marketplace-d6bpx\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.558039 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-catalog-content\") 
pod \"redhat-marketplace-d6bpx\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.660266 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdhq8\" (UniqueName: \"kubernetes.io/projected/a8369007-c46d-4045-839e-093143669f14-kube-api-access-zdhq8\") pod \"redhat-marketplace-d6bpx\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.660641 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-catalog-content\") pod \"redhat-marketplace-d6bpx\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.660700 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-utilities\") pod \"redhat-marketplace-d6bpx\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.661115 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-utilities\") pod \"redhat-marketplace-d6bpx\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.661195 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-catalog-content\") pod \"redhat-marketplace-d6bpx\" (UID: 
\"a8369007-c46d-4045-839e-093143669f14\") " pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.682593 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdhq8\" (UniqueName: \"kubernetes.io/projected/a8369007-c46d-4045-839e-093143669f14-kube-api-access-zdhq8\") pod \"redhat-marketplace-d6bpx\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:16 crc kubenswrapper[4736]: I0214 12:02:16.764927 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:17 crc kubenswrapper[4736]: I0214 12:02:17.076193 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:02:17 crc kubenswrapper[4736]: I0214 12:02:17.224495 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6bpx"] Feb 14 12:02:17 crc kubenswrapper[4736]: W0214 12:02:17.235468 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8369007_c46d_4045_839e_093143669f14.slice/crio-9cd128f5f53485b47cf419bf0ace8aee92f55b9cfcbf215ce4b0c6beda18c9af WatchSource:0}: Error finding container 9cd128f5f53485b47cf419bf0ace8aee92f55b9cfcbf215ce4b0c6beda18c9af: Status 404 returned error can't find the container with id 9cd128f5f53485b47cf419bf0ace8aee92f55b9cfcbf215ce4b0c6beda18c9af Feb 14 12:02:17 crc kubenswrapper[4736]: I0214 12:02:17.715288 4736 generic.go:334] "Generic (PLEG): container finished" podID="a8369007-c46d-4045-839e-093143669f14" containerID="3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b" exitCode=0 Feb 14 12:02:17 crc kubenswrapper[4736]: I0214 12:02:17.715377 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-d6bpx" event={"ID":"a8369007-c46d-4045-839e-093143669f14","Type":"ContainerDied","Data":"3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b"} Feb 14 12:02:17 crc kubenswrapper[4736]: I0214 12:02:17.715833 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6bpx" event={"ID":"a8369007-c46d-4045-839e-093143669f14","Type":"ContainerStarted","Data":"9cd128f5f53485b47cf419bf0ace8aee92f55b9cfcbf215ce4b0c6beda18c9af"} Feb 14 12:02:18 crc kubenswrapper[4736]: I0214 12:02:18.730117 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6bpx" event={"ID":"a8369007-c46d-4045-839e-093143669f14","Type":"ContainerStarted","Data":"8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9"} Feb 14 12:02:19 crc kubenswrapper[4736]: I0214 12:02:19.397398 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:02:19 crc kubenswrapper[4736]: I0214 12:02:19.400476 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpf7j"] Feb 14 12:02:19 crc kubenswrapper[4736]: I0214 12:02:19.400696 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bpf7j" podUID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerName="registry-server" containerID="cri-o://eca87d51a3600a2248e8bb737ae1f53df0cbc00cbc7bcfbced3bbb3d40278dc6" gracePeriod=2 Feb 14 12:02:19 crc kubenswrapper[4736]: I0214 12:02:19.741573 4736 generic.go:334] "Generic (PLEG): container finished" podID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerID="eca87d51a3600a2248e8bb737ae1f53df0cbc00cbc7bcfbced3bbb3d40278dc6" exitCode=0 Feb 14 12:02:19 crc kubenswrapper[4736]: I0214 12:02:19.741879 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpf7j" 
event={"ID":"c2b9cc79-8784-40ca-bead-4f0e577a3a77","Type":"ContainerDied","Data":"eca87d51a3600a2248e8bb737ae1f53df0cbc00cbc7bcfbced3bbb3d40278dc6"} Feb 14 12:02:19 crc kubenswrapper[4736]: I0214 12:02:19.743385 4736 generic.go:334] "Generic (PLEG): container finished" podID="a8369007-c46d-4045-839e-093143669f14" containerID="8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9" exitCode=0 Feb 14 12:02:19 crc kubenswrapper[4736]: I0214 12:02:19.743431 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6bpx" event={"ID":"a8369007-c46d-4045-839e-093143669f14","Type":"ContainerDied","Data":"8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9"} Feb 14 12:02:19 crc kubenswrapper[4736]: I0214 12:02:19.886798 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.029088 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-utilities\") pod \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.029304 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5qk8\" (UniqueName: \"kubernetes.io/projected/c2b9cc79-8784-40ca-bead-4f0e577a3a77-kube-api-access-c5qk8\") pod \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.029479 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-catalog-content\") pod \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\" (UID: \"c2b9cc79-8784-40ca-bead-4f0e577a3a77\") " Feb 14 
12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.029911 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-utilities" (OuterVolumeSpecName: "utilities") pod "c2b9cc79-8784-40ca-bead-4f0e577a3a77" (UID: "c2b9cc79-8784-40ca-bead-4f0e577a3a77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.030329 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.045374 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b9cc79-8784-40ca-bead-4f0e577a3a77-kube-api-access-c5qk8" (OuterVolumeSpecName: "kube-api-access-c5qk8") pod "c2b9cc79-8784-40ca-bead-4f0e577a3a77" (UID: "c2b9cc79-8784-40ca-bead-4f0e577a3a77"). InnerVolumeSpecName "kube-api-access-c5qk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.091380 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2b9cc79-8784-40ca-bead-4f0e577a3a77" (UID: "c2b9cc79-8784-40ca-bead-4f0e577a3a77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.131617 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5qk8\" (UniqueName: \"kubernetes.io/projected/c2b9cc79-8784-40ca-bead-4f0e577a3a77-kube-api-access-c5qk8\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.131653 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b9cc79-8784-40ca-bead-4f0e577a3a77-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.768287 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6bpx" event={"ID":"a8369007-c46d-4045-839e-093143669f14","Type":"ContainerStarted","Data":"e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a"} Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.770858 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"4e51bb2e7093fda381a5b8207db58bde0af705d17c7b2fdd241938e3efe166d2"} Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.772972 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpf7j" event={"ID":"c2b9cc79-8784-40ca-bead-4f0e577a3a77","Type":"ContainerDied","Data":"aa006d11ac7594d74bdaead3e22187f2397b55329783908178a2bade0983a726"} Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.773014 4736 scope.go:117] "RemoveContainer" containerID="eca87d51a3600a2248e8bb737ae1f53df0cbc00cbc7bcfbced3bbb3d40278dc6" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.773028 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpf7j" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.791885 4736 scope.go:117] "RemoveContainer" containerID="76f0ecedf0dab58170c53e6287c1af7ae422c4be6be11bdbfd1e8a41e98b1fe0" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.805207 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d6bpx" podStartSLOduration=2.157738641 podStartE2EDuration="4.805185329s" podCreationTimestamp="2026-02-14 12:02:16 +0000 UTC" firstStartedPulling="2026-02-14 12:02:17.716751724 +0000 UTC m=+4848.085379082" lastFinishedPulling="2026-02-14 12:02:20.364198382 +0000 UTC m=+4850.732825770" observedRunningTime="2026-02-14 12:02:20.792283756 +0000 UTC m=+4851.160911144" watchObservedRunningTime="2026-02-14 12:02:20.805185329 +0000 UTC m=+4851.173812707" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.825843 4736 scope.go:117] "RemoveContainer" containerID="d58a39875c2ec1f9a1c2e87c070f8069734413ca28de0e2443480d36b9e19078" Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.870233 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpf7j"] Feb 14 12:02:20 crc kubenswrapper[4736]: I0214 12:02:20.882682 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bpf7j"] Feb 14 12:02:22 crc kubenswrapper[4736]: I0214 12:02:22.408632 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" path="/var/lib/kubelet/pods/c2b9cc79-8784-40ca-bead-4f0e577a3a77/volumes" Feb 14 12:02:26 crc kubenswrapper[4736]: I0214 12:02:26.765374 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:26 crc kubenswrapper[4736]: I0214 12:02:26.765986 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:26 crc kubenswrapper[4736]: I0214 12:02:26.819509 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:26 crc kubenswrapper[4736]: I0214 12:02:26.874587 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:27 crc kubenswrapper[4736]: I0214 12:02:27.804152 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6bpx"] Feb 14 12:02:28 crc kubenswrapper[4736]: I0214 12:02:28.844328 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d6bpx" podUID="a8369007-c46d-4045-839e-093143669f14" containerName="registry-server" containerID="cri-o://e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a" gracePeriod=2 Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.299016 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.464543 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdhq8\" (UniqueName: \"kubernetes.io/projected/a8369007-c46d-4045-839e-093143669f14-kube-api-access-zdhq8\") pod \"a8369007-c46d-4045-839e-093143669f14\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.464665 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-utilities\") pod \"a8369007-c46d-4045-839e-093143669f14\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.464690 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-catalog-content\") pod \"a8369007-c46d-4045-839e-093143669f14\" (UID: \"a8369007-c46d-4045-839e-093143669f14\") " Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.465877 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-utilities" (OuterVolumeSpecName: "utilities") pod "a8369007-c46d-4045-839e-093143669f14" (UID: "a8369007-c46d-4045-839e-093143669f14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.470842 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8369007-c46d-4045-839e-093143669f14-kube-api-access-zdhq8" (OuterVolumeSpecName: "kube-api-access-zdhq8") pod "a8369007-c46d-4045-839e-093143669f14" (UID: "a8369007-c46d-4045-839e-093143669f14"). InnerVolumeSpecName "kube-api-access-zdhq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.473540 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdhq8\" (UniqueName: \"kubernetes.io/projected/a8369007-c46d-4045-839e-093143669f14-kube-api-access-zdhq8\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.473572 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.488139 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8369007-c46d-4045-839e-093143669f14" (UID: "a8369007-c46d-4045-839e-093143669f14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.576105 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8369007-c46d-4045-839e-093143669f14-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.854138 4736 generic.go:334] "Generic (PLEG): container finished" podID="a8369007-c46d-4045-839e-093143669f14" containerID="e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a" exitCode=0 Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.854177 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6bpx" event={"ID":"a8369007-c46d-4045-839e-093143669f14","Type":"ContainerDied","Data":"e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a"} Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.854202 4736 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-d6bpx" event={"ID":"a8369007-c46d-4045-839e-093143669f14","Type":"ContainerDied","Data":"9cd128f5f53485b47cf419bf0ace8aee92f55b9cfcbf215ce4b0c6beda18c9af"} Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.854222 4736 scope.go:117] "RemoveContainer" containerID="e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.854215 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6bpx" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.894424 4736 scope.go:117] "RemoveContainer" containerID="8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.909394 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6bpx"] Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.917420 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6bpx"] Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.918857 4736 scope.go:117] "RemoveContainer" containerID="3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.977699 4736 scope.go:117] "RemoveContainer" containerID="e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a" Feb 14 12:02:29 crc kubenswrapper[4736]: E0214 12:02:29.978131 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a\": container with ID starting with e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a not found: ID does not exist" containerID="e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.978170 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a"} err="failed to get container status \"e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a\": rpc error: code = NotFound desc = could not find container \"e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a\": container with ID starting with e04d42c6be3df4ff7836fdc35b0c2969c61f6cab2ea78ab7dd298bb67193760a not found: ID does not exist" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.978194 4736 scope.go:117] "RemoveContainer" containerID="8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9" Feb 14 12:02:29 crc kubenswrapper[4736]: E0214 12:02:29.978614 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9\": container with ID starting with 8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9 not found: ID does not exist" containerID="8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.978647 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9"} err="failed to get container status \"8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9\": rpc error: code = NotFound desc = could not find container \"8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9\": container with ID starting with 8af71090a982e6802fd1884f933a8c2620765032d68a01f0a2e4b9b46d5f8bf9 not found: ID does not exist" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.978668 4736 scope.go:117] "RemoveContainer" containerID="3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b" Feb 14 12:02:29 crc kubenswrapper[4736]: E0214 
12:02:29.978948 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b\": container with ID starting with 3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b not found: ID does not exist" containerID="3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b" Feb 14 12:02:29 crc kubenswrapper[4736]: I0214 12:02:29.978970 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b"} err="failed to get container status \"3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b\": rpc error: code = NotFound desc = could not find container \"3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b\": container with ID starting with 3709a7144b86763698e9acfebc96dc350bbb910745447d76c4e414b1f510279b not found: ID does not exist" Feb 14 12:02:30 crc kubenswrapper[4736]: I0214 12:02:30.411072 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8369007-c46d-4045-839e-093143669f14" path="/var/lib/kubelet/pods/a8369007-c46d-4045-839e-093143669f14/volumes" Feb 14 12:02:33 crc kubenswrapper[4736]: I0214 12:02:33.861154 4736 generic.go:334] "Generic (PLEG): container finished" podID="46701fcb-54ad-4711-9635-9e96559129d1" containerID="16d9f8d466dfb115b1077d33aaa726336e812acf4dd5b8d49263da774f83e0cb" exitCode=0 Feb 14 12:02:33 crc kubenswrapper[4736]: I0214 12:02:33.861621 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk64r/crc-debug-s7dgm" event={"ID":"46701fcb-54ad-4711-9635-9e96559129d1","Type":"ContainerDied","Data":"16d9f8d466dfb115b1077d33aaa726336e812acf4dd5b8d49263da774f83e0cb"} Feb 14 12:02:34 crc kubenswrapper[4736]: I0214 12:02:34.979993 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:02:35 crc kubenswrapper[4736]: I0214 12:02:35.019678 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk64r/crc-debug-s7dgm"] Feb 14 12:02:35 crc kubenswrapper[4736]: I0214 12:02:35.029624 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk64r/crc-debug-s7dgm"] Feb 14 12:02:35 crc kubenswrapper[4736]: I0214 12:02:35.054803 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmn76\" (UniqueName: \"kubernetes.io/projected/46701fcb-54ad-4711-9635-9e96559129d1-kube-api-access-fmn76\") pod \"46701fcb-54ad-4711-9635-9e96559129d1\" (UID: \"46701fcb-54ad-4711-9635-9e96559129d1\") " Feb 14 12:02:35 crc kubenswrapper[4736]: I0214 12:02:35.054899 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46701fcb-54ad-4711-9635-9e96559129d1-host\") pod \"46701fcb-54ad-4711-9635-9e96559129d1\" (UID: \"46701fcb-54ad-4711-9635-9e96559129d1\") " Feb 14 12:02:35 crc kubenswrapper[4736]: I0214 12:02:35.055148 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46701fcb-54ad-4711-9635-9e96559129d1-host" (OuterVolumeSpecName: "host") pod "46701fcb-54ad-4711-9635-9e96559129d1" (UID: "46701fcb-54ad-4711-9635-9e96559129d1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 12:02:35 crc kubenswrapper[4736]: I0214 12:02:35.055504 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46701fcb-54ad-4711-9635-9e96559129d1-host\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:35 crc kubenswrapper[4736]: I0214 12:02:35.073115 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46701fcb-54ad-4711-9635-9e96559129d1-kube-api-access-fmn76" (OuterVolumeSpecName: "kube-api-access-fmn76") pod "46701fcb-54ad-4711-9635-9e96559129d1" (UID: "46701fcb-54ad-4711-9635-9e96559129d1"). InnerVolumeSpecName "kube-api-access-fmn76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:02:35 crc kubenswrapper[4736]: I0214 12:02:35.157358 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmn76\" (UniqueName: \"kubernetes.io/projected/46701fcb-54ad-4711-9635-9e96559129d1-kube-api-access-fmn76\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.134013 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f944130c483c76a146aec86d8df7cb0112737515efd237ccfdbdea057683326" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.134472 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-s7dgm" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.307495 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk64r/crc-debug-qtd5b"] Feb 14 12:02:36 crc kubenswrapper[4736]: E0214 12:02:36.308802 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8369007-c46d-4045-839e-093143669f14" containerName="extract-content" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.308824 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8369007-c46d-4045-839e-093143669f14" containerName="extract-content" Feb 14 12:02:36 crc kubenswrapper[4736]: E0214 12:02:36.308842 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerName="extract-content" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.308850 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerName="extract-content" Feb 14 12:02:36 crc kubenswrapper[4736]: E0214 12:02:36.308866 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8369007-c46d-4045-839e-093143669f14" containerName="extract-utilities" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.308875 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8369007-c46d-4045-839e-093143669f14" containerName="extract-utilities" Feb 14 12:02:36 crc kubenswrapper[4736]: E0214 12:02:36.308901 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerName="extract-utilities" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.308909 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerName="extract-utilities" Feb 14 12:02:36 crc kubenswrapper[4736]: E0214 12:02:36.308925 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a8369007-c46d-4045-839e-093143669f14" containerName="registry-server" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.308933 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8369007-c46d-4045-839e-093143669f14" containerName="registry-server" Feb 14 12:02:36 crc kubenswrapper[4736]: E0214 12:02:36.308973 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerName="registry-server" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.308981 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerName="registry-server" Feb 14 12:02:36 crc kubenswrapper[4736]: E0214 12:02:36.309002 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46701fcb-54ad-4711-9635-9e96559129d1" containerName="container-00" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.309010 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="46701fcb-54ad-4711-9635-9e96559129d1" containerName="container-00" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.309228 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="46701fcb-54ad-4711-9635-9e96559129d1" containerName="container-00" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.309247 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b9cc79-8784-40ca-bead-4f0e577a3a77" containerName="registry-server" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.309267 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8369007-c46d-4045-839e-093143669f14" containerName="registry-server" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.310004 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.311587 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rk64r"/"default-dockercfg-brmmf" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.318629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msd2k\" (UniqueName: \"kubernetes.io/projected/5e95db86-442c-4e76-a7c3-9bc28967d3f7-kube-api-access-msd2k\") pod \"crc-debug-qtd5b\" (UID: \"5e95db86-442c-4e76-a7c3-9bc28967d3f7\") " pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.318712 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e95db86-442c-4e76-a7c3-9bc28967d3f7-host\") pod \"crc-debug-qtd5b\" (UID: \"5e95db86-442c-4e76-a7c3-9bc28967d3f7\") " pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.405998 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46701fcb-54ad-4711-9635-9e96559129d1" path="/var/lib/kubelet/pods/46701fcb-54ad-4711-9635-9e96559129d1/volumes" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.420419 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msd2k\" (UniqueName: \"kubernetes.io/projected/5e95db86-442c-4e76-a7c3-9bc28967d3f7-kube-api-access-msd2k\") pod \"crc-debug-qtd5b\" (UID: \"5e95db86-442c-4e76-a7c3-9bc28967d3f7\") " pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.420508 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e95db86-442c-4e76-a7c3-9bc28967d3f7-host\") pod \"crc-debug-qtd5b\" (UID: 
\"5e95db86-442c-4e76-a7c3-9bc28967d3f7\") " pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.420597 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e95db86-442c-4e76-a7c3-9bc28967d3f7-host\") pod \"crc-debug-qtd5b\" (UID: \"5e95db86-442c-4e76-a7c3-9bc28967d3f7\") " pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.450940 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msd2k\" (UniqueName: \"kubernetes.io/projected/5e95db86-442c-4e76-a7c3-9bc28967d3f7-kube-api-access-msd2k\") pod \"crc-debug-qtd5b\" (UID: \"5e95db86-442c-4e76-a7c3-9bc28967d3f7\") " pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:36 crc kubenswrapper[4736]: I0214 12:02:36.626838 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:37 crc kubenswrapper[4736]: I0214 12:02:37.143239 4736 generic.go:334] "Generic (PLEG): container finished" podID="5e95db86-442c-4e76-a7c3-9bc28967d3f7" containerID="16d75ce76c22bd3c36f01181eaaa8ad7139008bd6128d5d275143765bf97bc9e" exitCode=0 Feb 14 12:02:37 crc kubenswrapper[4736]: I0214 12:02:37.143381 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk64r/crc-debug-qtd5b" event={"ID":"5e95db86-442c-4e76-a7c3-9bc28967d3f7","Type":"ContainerDied","Data":"16d75ce76c22bd3c36f01181eaaa8ad7139008bd6128d5d275143765bf97bc9e"} Feb 14 12:02:37 crc kubenswrapper[4736]: I0214 12:02:37.143904 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk64r/crc-debug-qtd5b" event={"ID":"5e95db86-442c-4e76-a7c3-9bc28967d3f7","Type":"ContainerStarted","Data":"fc9816cab34372a06622029bd60a68b335569a8f8c03886683b6296bd6d8045d"} Feb 14 12:02:38 crc kubenswrapper[4736]: I0214 12:02:38.251913 4736 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:38 crc kubenswrapper[4736]: I0214 12:02:38.362736 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msd2k\" (UniqueName: \"kubernetes.io/projected/5e95db86-442c-4e76-a7c3-9bc28967d3f7-kube-api-access-msd2k\") pod \"5e95db86-442c-4e76-a7c3-9bc28967d3f7\" (UID: \"5e95db86-442c-4e76-a7c3-9bc28967d3f7\") " Feb 14 12:02:38 crc kubenswrapper[4736]: I0214 12:02:38.362924 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e95db86-442c-4e76-a7c3-9bc28967d3f7-host\") pod \"5e95db86-442c-4e76-a7c3-9bc28967d3f7\" (UID: \"5e95db86-442c-4e76-a7c3-9bc28967d3f7\") " Feb 14 12:02:38 crc kubenswrapper[4736]: I0214 12:02:38.363405 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e95db86-442c-4e76-a7c3-9bc28967d3f7-host" (OuterVolumeSpecName: "host") pod "5e95db86-442c-4e76-a7c3-9bc28967d3f7" (UID: "5e95db86-442c-4e76-a7c3-9bc28967d3f7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 12:02:38 crc kubenswrapper[4736]: I0214 12:02:38.369962 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e95db86-442c-4e76-a7c3-9bc28967d3f7-kube-api-access-msd2k" (OuterVolumeSpecName: "kube-api-access-msd2k") pod "5e95db86-442c-4e76-a7c3-9bc28967d3f7" (UID: "5e95db86-442c-4e76-a7c3-9bc28967d3f7"). InnerVolumeSpecName "kube-api-access-msd2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:02:38 crc kubenswrapper[4736]: I0214 12:02:38.464990 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e95db86-442c-4e76-a7c3-9bc28967d3f7-host\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:38 crc kubenswrapper[4736]: I0214 12:02:38.465028 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msd2k\" (UniqueName: \"kubernetes.io/projected/5e95db86-442c-4e76-a7c3-9bc28967d3f7-kube-api-access-msd2k\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:39 crc kubenswrapper[4736]: I0214 12:02:39.120054 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk64r/crc-debug-qtd5b"] Feb 14 12:02:39 crc kubenswrapper[4736]: I0214 12:02:39.127528 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk64r/crc-debug-qtd5b"] Feb 14 12:02:39 crc kubenswrapper[4736]: I0214 12:02:39.162908 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9816cab34372a06622029bd60a68b335569a8f8c03886683b6296bd6d8045d" Feb 14 12:02:39 crc kubenswrapper[4736]: I0214 12:02:39.162959 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-qtd5b" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.570243 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e95db86-442c-4e76-a7c3-9bc28967d3f7" path="/var/lib/kubelet/pods/5e95db86-442c-4e76-a7c3-9bc28967d3f7/volumes" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.645209 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk64r/crc-debug-c579r"] Feb 14 12:02:40 crc kubenswrapper[4736]: E0214 12:02:40.645702 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e95db86-442c-4e76-a7c3-9bc28967d3f7" containerName="container-00" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.645723 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e95db86-442c-4e76-a7c3-9bc28967d3f7" containerName="container-00" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.645983 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e95db86-442c-4e76-a7c3-9bc28967d3f7" containerName="container-00" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.646919 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.651285 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rk64r"/"default-dockercfg-brmmf" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.757687 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e927b932-6ca5-477b-8805-3e238bbfa92b-host\") pod \"crc-debug-c579r\" (UID: \"e927b932-6ca5-477b-8805-3e238bbfa92b\") " pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.757832 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25vtt\" (UniqueName: \"kubernetes.io/projected/e927b932-6ca5-477b-8805-3e238bbfa92b-kube-api-access-25vtt\") pod \"crc-debug-c579r\" (UID: \"e927b932-6ca5-477b-8805-3e238bbfa92b\") " pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.859266 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e927b932-6ca5-477b-8805-3e238bbfa92b-host\") pod \"crc-debug-c579r\" (UID: \"e927b932-6ca5-477b-8805-3e238bbfa92b\") " pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.859391 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25vtt\" (UniqueName: \"kubernetes.io/projected/e927b932-6ca5-477b-8805-3e238bbfa92b-kube-api-access-25vtt\") pod \"crc-debug-c579r\" (UID: \"e927b932-6ca5-477b-8805-3e238bbfa92b\") " pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.859408 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/e927b932-6ca5-477b-8805-3e238bbfa92b-host\") pod \"crc-debug-c579r\" (UID: \"e927b932-6ca5-477b-8805-3e238bbfa92b\") " pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.884127 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25vtt\" (UniqueName: \"kubernetes.io/projected/e927b932-6ca5-477b-8805-3e238bbfa92b-kube-api-access-25vtt\") pod \"crc-debug-c579r\" (UID: \"e927b932-6ca5-477b-8805-3e238bbfa92b\") " pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:40 crc kubenswrapper[4736]: I0214 12:02:40.961417 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:40 crc kubenswrapper[4736]: W0214 12:02:40.992348 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode927b932_6ca5_477b_8805_3e238bbfa92b.slice/crio-8effdf1db59155077d2f8a359aee3f99bc05bef308bfbec9c206e058b970fd53 WatchSource:0}: Error finding container 8effdf1db59155077d2f8a359aee3f99bc05bef308bfbec9c206e058b970fd53: Status 404 returned error can't find the container with id 8effdf1db59155077d2f8a359aee3f99bc05bef308bfbec9c206e058b970fd53 Feb 14 12:02:41 crc kubenswrapper[4736]: I0214 12:02:41.205433 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk64r/crc-debug-c579r" event={"ID":"e927b932-6ca5-477b-8805-3e238bbfa92b","Type":"ContainerStarted","Data":"8effdf1db59155077d2f8a359aee3f99bc05bef308bfbec9c206e058b970fd53"} Feb 14 12:02:42 crc kubenswrapper[4736]: I0214 12:02:42.214055 4736 generic.go:334] "Generic (PLEG): container finished" podID="e927b932-6ca5-477b-8805-3e238bbfa92b" containerID="e72db468f9591e0f4ade5e7e16114fe721a4316ecabd250f80b71d6ec9b7d08b" exitCode=0 Feb 14 12:02:42 crc kubenswrapper[4736]: I0214 12:02:42.214096 4736 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-must-gather-rk64r/crc-debug-c579r" event={"ID":"e927b932-6ca5-477b-8805-3e238bbfa92b","Type":"ContainerDied","Data":"e72db468f9591e0f4ade5e7e16114fe721a4316ecabd250f80b71d6ec9b7d08b"} Feb 14 12:02:42 crc kubenswrapper[4736]: I0214 12:02:42.256619 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk64r/crc-debug-c579r"] Feb 14 12:02:42 crc kubenswrapper[4736]: I0214 12:02:42.266145 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk64r/crc-debug-c579r"] Feb 14 12:02:43 crc kubenswrapper[4736]: I0214 12:02:43.316969 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:43 crc kubenswrapper[4736]: I0214 12:02:43.400887 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25vtt\" (UniqueName: \"kubernetes.io/projected/e927b932-6ca5-477b-8805-3e238bbfa92b-kube-api-access-25vtt\") pod \"e927b932-6ca5-477b-8805-3e238bbfa92b\" (UID: \"e927b932-6ca5-477b-8805-3e238bbfa92b\") " Feb 14 12:02:43 crc kubenswrapper[4736]: I0214 12:02:43.401037 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e927b932-6ca5-477b-8805-3e238bbfa92b-host\") pod \"e927b932-6ca5-477b-8805-3e238bbfa92b\" (UID: \"e927b932-6ca5-477b-8805-3e238bbfa92b\") " Feb 14 12:02:43 crc kubenswrapper[4736]: I0214 12:02:43.401603 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e927b932-6ca5-477b-8805-3e238bbfa92b-host" (OuterVolumeSpecName: "host") pod "e927b932-6ca5-477b-8805-3e238bbfa92b" (UID: "e927b932-6ca5-477b-8805-3e238bbfa92b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 12:02:43 crc kubenswrapper[4736]: I0214 12:02:43.409415 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e927b932-6ca5-477b-8805-3e238bbfa92b-kube-api-access-25vtt" (OuterVolumeSpecName: "kube-api-access-25vtt") pod "e927b932-6ca5-477b-8805-3e238bbfa92b" (UID: "e927b932-6ca5-477b-8805-3e238bbfa92b"). InnerVolumeSpecName "kube-api-access-25vtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:02:43 crc kubenswrapper[4736]: I0214 12:02:43.503968 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e927b932-6ca5-477b-8805-3e238bbfa92b-host\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:43 crc kubenswrapper[4736]: I0214 12:02:43.504009 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25vtt\" (UniqueName: \"kubernetes.io/projected/e927b932-6ca5-477b-8805-3e238bbfa92b-kube-api-access-25vtt\") on node \"crc\" DevicePath \"\"" Feb 14 12:02:44 crc kubenswrapper[4736]: I0214 12:02:44.230439 4736 scope.go:117] "RemoveContainer" containerID="e72db468f9591e0f4ade5e7e16114fe721a4316ecabd250f80b71d6ec9b7d08b" Feb 14 12:02:44 crc kubenswrapper[4736]: I0214 12:02:44.230625 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk64r/crc-debug-c579r" Feb 14 12:02:44 crc kubenswrapper[4736]: I0214 12:02:44.406391 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e927b932-6ca5-477b-8805-3e238bbfa92b" path="/var/lib/kubelet/pods/e927b932-6ca5-477b-8805-3e238bbfa92b/volumes" Feb 14 12:03:38 crc kubenswrapper[4736]: I0214 12:03:38.802254 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79d9bb575d-6pwpg_fe4bb48e-4d5f-4b38-b862-d2fe632087a8/barbican-api/0.log" Feb 14 12:03:38 crc kubenswrapper[4736]: I0214 12:03:38.879472 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79d9bb575d-6pwpg_fe4bb48e-4d5f-4b38-b862-d2fe632087a8/barbican-api-log/0.log" Feb 14 12:03:39 crc kubenswrapper[4736]: I0214 12:03:39.253716 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85957cbc8-r7xrw_a1af432c-5ab8-4eb5-87f0-2f9519c1004b/barbican-keystone-listener-log/0.log" Feb 14 12:03:39 crc kubenswrapper[4736]: I0214 12:03:39.314223 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85957cbc8-r7xrw_a1af432c-5ab8-4eb5-87f0-2f9519c1004b/barbican-keystone-listener/0.log" Feb 14 12:03:39 crc kubenswrapper[4736]: I0214 12:03:39.481496 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79c85f78bf-qrrmn_dc8cc8f5-bfab-490d-be14-44be8090fb21/barbican-worker/0.log" Feb 14 12:03:39 crc kubenswrapper[4736]: I0214 12:03:39.536335 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79c85f78bf-qrrmn_dc8cc8f5-bfab-490d-be14-44be8090fb21/barbican-worker-log/0.log" Feb 14 12:03:39 crc kubenswrapper[4736]: I0214 12:03:39.703367 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-htwjd_3bc4af51-ea9d-471b-a6d1-6330e3f48a5a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:39 crc kubenswrapper[4736]: I0214 12:03:39.795825 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0e312a4e-321c-45f9-a15b-b41e8a500356/ceilometer-central-agent/0.log" Feb 14 12:03:39 crc kubenswrapper[4736]: I0214 12:03:39.869245 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0e312a4e-321c-45f9-a15b-b41e8a500356/ceilometer-notification-agent/0.log" Feb 14 12:03:39 crc kubenswrapper[4736]: I0214 12:03:39.979045 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0e312a4e-321c-45f9-a15b-b41e8a500356/proxy-httpd/0.log" Feb 14 12:03:40 crc kubenswrapper[4736]: I0214 12:03:40.014907 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0e312a4e-321c-45f9-a15b-b41e8a500356/sg-core/0.log" Feb 14 12:03:40 crc kubenswrapper[4736]: I0214 12:03:40.142661 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32441c5d-4041-4687-b31b-fb121c4d01a7/cinder-api/0.log" Feb 14 12:03:40 crc kubenswrapper[4736]: I0214 12:03:40.235824 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32441c5d-4041-4687-b31b-fb121c4d01a7/cinder-api-log/0.log" Feb 14 12:03:40 crc kubenswrapper[4736]: I0214 12:03:40.347387 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_13d207cc-8160-449b-8049-04047efb4b20/cinder-scheduler/0.log" Feb 14 12:03:40 crc kubenswrapper[4736]: I0214 12:03:40.416772 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_13d207cc-8160-449b-8049-04047efb4b20/probe/0.log" Feb 14 12:03:40 crc kubenswrapper[4736]: I0214 12:03:40.602420 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-gpjhp_3b5bed78-9221-4954-b969-9a676c00a110/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:40 crc kubenswrapper[4736]: I0214 12:03:40.647598 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-m97sp_38ba8e01-1e02-4937-aeac-badb36edee69/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:40 crc kubenswrapper[4736]: I0214 12:03:40.823920 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-4wwj2_87fe97c7-f360-4d7b-988f-0779aa692cde/init/0.log" Feb 14 12:03:41 crc kubenswrapper[4736]: I0214 12:03:41.460821 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-4wwj2_87fe97c7-f360-4d7b-988f-0779aa692cde/init/0.log" Feb 14 12:03:41 crc kubenswrapper[4736]: I0214 12:03:41.559062 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7nsfd_fdcfdd0a-6f5a-44be-862f-2329a1f0a60c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:41 crc kubenswrapper[4736]: I0214 12:03:41.684646 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-4wwj2_87fe97c7-f360-4d7b-988f-0779aa692cde/dnsmasq-dns/0.log" Feb 14 12:03:41 crc kubenswrapper[4736]: I0214 12:03:41.875820 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_31f01831-be73-46fa-815b-bc32d58fb0fd/glance-httpd/0.log" Feb 14 12:03:41 crc kubenswrapper[4736]: I0214 12:03:41.895309 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_31f01831-be73-46fa-815b-bc32d58fb0fd/glance-log/0.log" Feb 14 12:03:42 crc kubenswrapper[4736]: I0214 12:03:42.178760 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_f0aa2a69-bea9-4934-9b60-209ecd22eb0a/glance-httpd/0.log" Feb 14 12:03:42 crc kubenswrapper[4736]: I0214 12:03:42.187668 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f0aa2a69-bea9-4934-9b60-209ecd22eb0a/glance-log/0.log" Feb 14 12:03:42 crc kubenswrapper[4736]: I0214 12:03:42.294331 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78d96c5d8-mfqqp_bd003c66-fc46-445a-a88a-23a7c17f9747/horizon/2.log" Feb 14 12:03:42 crc kubenswrapper[4736]: I0214 12:03:42.496104 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78d96c5d8-mfqqp_bd003c66-fc46-445a-a88a-23a7c17f9747/horizon/1.log" Feb 14 12:03:42 crc kubenswrapper[4736]: I0214 12:03:42.841952 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78d96c5d8-mfqqp_bd003c66-fc46-445a-a88a-23a7c17f9747/horizon-log/0.log" Feb 14 12:03:43 crc kubenswrapper[4736]: I0214 12:03:43.207857 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vxrr7_a0ab8569-328c-4ffb-89c5-d08ae95e5016/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:43 crc kubenswrapper[4736]: I0214 12:03:43.221424 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-x9hvd_928b193b-069f-4f4b-80a6-13c347302fcf/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:43 crc kubenswrapper[4736]: I0214 12:03:43.607033 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29517841-w6gzg_84846f34-844e-488c-934d-ef0b1563721c/keystone-cron/0.log" Feb 14 12:03:43 crc kubenswrapper[4736]: I0214 12:03:43.608227 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a60c0b91-8564-472a-b4a7-8bab9a773d39/kube-state-metrics/0.log" Feb 
14 12:03:43 crc kubenswrapper[4736]: I0214 12:03:43.749569 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7677d9df65-nl5rx_6b8224f3-7e3a-4591-9efd-6f3b4c6bf8f1/keystone-api/0.log" Feb 14 12:03:43 crc kubenswrapper[4736]: I0214 12:03:43.896962 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-qmxtw_9e4b30a3-64e1-4f40-b895-41ac069e85f9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:44 crc kubenswrapper[4736]: I0214 12:03:44.467092 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0a412d55-7134-4b50-b303-3174348c85fa/memcached/0.log" Feb 14 12:03:44 crc kubenswrapper[4736]: I0214 12:03:44.491635 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-qn2sm_2c3c97eb-a17e-429f-84da-df394440c78c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:44 crc kubenswrapper[4736]: I0214 12:03:44.534678 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7979c77cb9-ql2gq_8af015af-390d-4300-95e1-976c308f136c/neutron-httpd/0.log" Feb 14 12:03:44 crc kubenswrapper[4736]: I0214 12:03:44.631060 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7979c77cb9-ql2gq_8af015af-390d-4300-95e1-976c308f136c/neutron-api/0.log" Feb 14 12:03:45 crc kubenswrapper[4736]: I0214 12:03:45.316912 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2cd086e8-4f54-40fa-9f03-f2434e27ce21/nova-cell0-conductor-conductor/0.log" Feb 14 12:03:45 crc kubenswrapper[4736]: I0214 12:03:45.387587 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_22a7ad59-032e-457c-84ee-a3145f286106/nova-cell1-conductor-conductor/0.log" Feb 14 12:03:45 crc kubenswrapper[4736]: I0214 12:03:45.629854 4736 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_56c90a64-d883-4865-a393-2a9aec8e43a8/nova-cell1-novncproxy-novncproxy/0.log" Feb 14 12:03:45 crc kubenswrapper[4736]: I0214 12:03:45.719058 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-sj8v2_e40d6c31-4f67-46cc-b2a2-991133a68003/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:45 crc kubenswrapper[4736]: I0214 12:03:45.943943 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_056456b3-9911-4a23-9322-7072e9170cbe/nova-api-log/0.log" Feb 14 12:03:46 crc kubenswrapper[4736]: I0214 12:03:46.036972 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_501f8e75-5b0d-4226-b3d4-3ac92c58911c/nova-metadata-log/0.log" Feb 14 12:03:46 crc kubenswrapper[4736]: I0214 12:03:46.390429 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_056456b3-9911-4a23-9322-7072e9170cbe/nova-api-api/0.log" Feb 14 12:03:46 crc kubenswrapper[4736]: I0214 12:03:46.672646 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3a11355-8757-409d-b440-6b1a372ddd72/mysql-bootstrap/0.log" Feb 14 12:03:46 crc kubenswrapper[4736]: I0214 12:03:46.794877 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_394f1f5f-0af2-4451-b497-6f15295099a4/nova-scheduler-scheduler/0.log" Feb 14 12:03:46 crc kubenswrapper[4736]: I0214 12:03:46.910714 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3a11355-8757-409d-b440-6b1a372ddd72/galera/0.log" Feb 14 12:03:46 crc kubenswrapper[4736]: I0214 12:03:46.925327 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3a11355-8757-409d-b440-6b1a372ddd72/mysql-bootstrap/0.log" Feb 14 12:03:47 crc kubenswrapper[4736]: I0214 12:03:47.137624 4736 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f077df65-4b06-4908-87bb-d08572879c62/mysql-bootstrap/0.log" Feb 14 12:03:47 crc kubenswrapper[4736]: I0214 12:03:47.413401 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f077df65-4b06-4908-87bb-d08572879c62/mysql-bootstrap/0.log" Feb 14 12:03:47 crc kubenswrapper[4736]: I0214 12:03:47.418863 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ec5ce106-52f4-4985-a2b9-99266fe3d2d9/openstackclient/0.log" Feb 14 12:03:47 crc kubenswrapper[4736]: I0214 12:03:47.423378 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f077df65-4b06-4908-87bb-d08572879c62/galera/0.log" Feb 14 12:03:47 crc kubenswrapper[4736]: I0214 12:03:47.431663 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_501f8e75-5b0d-4226-b3d4-3ac92c58911c/nova-metadata-metadata/0.log" Feb 14 12:03:47 crc kubenswrapper[4736]: I0214 12:03:47.653414 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-cxr2b_1cc96bf2-147a-454d-8443-20e850d25ad0/openstack-network-exporter/0.log" Feb 14 12:03:47 crc kubenswrapper[4736]: I0214 12:03:47.700377 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-msd5j_2cce5ddc-39fe-4682-93a8-ef7aaac7a4ba/ovn-controller/0.log" Feb 14 12:03:47 crc kubenswrapper[4736]: I0214 12:03:47.830370 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6dm75_4dc5a707-dee1-457c-9100-e80b9eb96f6c/ovsdb-server-init/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.051938 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6dm75_4dc5a707-dee1-457c-9100-e80b9eb96f6c/ovsdb-server-init/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.080045 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-6dm75_4dc5a707-dee1-457c-9100-e80b9eb96f6c/ovs-vswitchd/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.097023 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6dm75_4dc5a707-dee1-457c-9100-e80b9eb96f6c/ovsdb-server/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.214058 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5wgrt_c0fc1129-2e48-4afa-ad54-fce50eaaeddc/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.321381 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d28847a2-993a-4124-b138-ecec67828807/openstack-network-exporter/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.379818 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d28847a2-993a-4124-b138-ecec67828807/ovn-northd/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.485592 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0705ea43-d70f-400e-ac09-07dbebf128ea/openstack-network-exporter/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.596446 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0705ea43-d70f-400e-ac09-07dbebf128ea/ovsdbserver-nb/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.702403 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18/openstack-network-exporter/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.747109 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_828f8add-3a9b-4ff0-82ef-ebb7c1b3dc18/ovsdbserver-sb/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.927024 4736 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_placement-557678d96b-tqmtc_ec9d0890-b994-4ada-a802-a43cbe2fc50e/placement-api/0.log" Feb 14 12:03:48 crc kubenswrapper[4736]: I0214 12:03:48.991136 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_070e414c-ea91-48aa-871d-ebfed740c5b3/setup-container/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.119325 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-557678d96b-tqmtc_ec9d0890-b994-4ada-a802-a43cbe2fc50e/placement-log/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.281830 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7b1cfbb-0f84-4915-bae6-0bd165726dba/setup-container/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.283654 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_070e414c-ea91-48aa-871d-ebfed740c5b3/setup-container/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.359137 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_070e414c-ea91-48aa-871d-ebfed740c5b3/rabbitmq/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.521055 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7b1cfbb-0f84-4915-bae6-0bd165726dba/setup-container/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.581737 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-9snhh_9cdcee64-bc6c-40ca-8db3-50335948db44/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.609971 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7b1cfbb-0f84-4915-bae6-0bd165726dba/rabbitmq/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.712276 4736 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-75mgr_443efa04-503f-4571-b1a3-d31c88bc0a5c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.762506 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-l25sn_f0c54768-3b9b-423a-8099-0282ad3ea027/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.875887 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9xcr4_4795d395-5dcc-4284-b6ee-607b2c9a1f97/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:49 crc kubenswrapper[4736]: I0214 12:03:49.963638 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-kdd4d_a8f1507b-722e-46b0-a239-48a5100e9971/ssh-known-hosts-edpm-deployment/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.129380 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c6f565b75-vzhbj_6c072889-cf21-4f12-a6eb-14fe8409b860/proxy-httpd/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.145206 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c6f565b75-vzhbj_6c072889-cf21-4f12-a6eb-14fe8409b860/proxy-server/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.188427 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ccs82_02d4bbe4-e30d-4906-ac6e-c4da9f6faf9a/swift-ring-rebalance/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.504928 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/account-reaper/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.603060 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/account-auditor/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.668370 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/account-replicator/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.748606 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/account-server/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.797317 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/container-server/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.802893 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/container-auditor/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.812863 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/container-replicator/0.log" Feb 14 12:03:50 crc kubenswrapper[4736]: I0214 12:03:50.901312 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/container-updater/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.007260 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-expirer/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.021352 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-auditor/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.071030 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-replicator/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.082178 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-server/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.120826 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/object-updater/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.230319 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/swift-recon-cron/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.258338 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0283d5c8-4795-458e-8faf-c4908c75e01e/rsync/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.388410 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7dwp5_8c44413d-b97e-45f6-80d1-71f5e489c4ac/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.511552 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_ab2bcae4-a5d8-471d-a031-b0e810759ab1/tempest-tests-tempest-tests-runner/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.555245 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fb072c2c-7982-4e0b-825a-1b64b951f0a7/test-operator-logs-container/0.log" Feb 14 12:03:51 crc kubenswrapper[4736]: I0214 12:03:51.667102 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-pflph_e4f07dbc-dcbf-40e3-b7cd-f2292caa2f19/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 12:04:25 crc kubenswrapper[4736]: I0214 12:04:25.188768 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/util/0.log" Feb 14 12:04:25 crc kubenswrapper[4736]: I0214 12:04:25.435782 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/pull/0.log" Feb 14 12:04:25 crc kubenswrapper[4736]: I0214 12:04:25.522143 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/pull/0.log" Feb 14 12:04:25 crc kubenswrapper[4736]: I0214 12:04:25.533128 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/util/0.log" Feb 14 12:04:25 crc kubenswrapper[4736]: I0214 12:04:25.663476 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/util/0.log" Feb 14 12:04:25 crc kubenswrapper[4736]: I0214 12:04:25.776084 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/extract/0.log" Feb 14 12:04:25 crc kubenswrapper[4736]: I0214 12:04:25.805022 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_7769d3e81e94379c2f5b315fb320e58bd8a69a5a8755a8c3c3079d6565dtptp_be6a93c1-1984-4663-b989-684667f31ec9/pull/0.log" Feb 14 12:04:26 crc kubenswrapper[4736]: I0214 12:04:26.450389 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-ddq5f_049efcc4-9d6e-47ff-8476-a29e06c6f362/manager/0.log" Feb 14 12:04:26 crc kubenswrapper[4736]: I0214 12:04:26.805581 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68fd459cc4-lwpwl_8b8b4f4d-ca75-4127-bf64-3db5839a9ccb/manager/0.log" Feb 14 12:04:27 crc kubenswrapper[4736]: I0214 12:04:27.066963 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-9595d6797-g7wc9_f0eae102-9c64-42bb-b7eb-64c54f3bf219/manager/0.log" Feb 14 12:04:27 crc kubenswrapper[4736]: I0214 12:04:27.318903 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-54fb488b88-pcttg_c9abe211-c0d9-4487-856f-12a41e4ad006/manager/0.log" Feb 14 12:04:28 crc kubenswrapper[4736]: I0214 12:04:28.019982 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6494cdbf8f-mt8zx_55648e35-636d-4321-bdfe-e7171a70e87d/manager/0.log" Feb 14 12:04:28 crc kubenswrapper[4736]: I0214 12:04:28.184641 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-66d6b5f488-6wjlq_434321f7-faee-40e8-8d52-6c863d100da6/manager/0.log" Feb 14 12:04:28 crc kubenswrapper[4736]: I0214 12:04:28.522380 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c78d668d5-7b9sw_0e8252f4-9ac3-4a5c-9f8a-dee1091ef0c1/manager/0.log" Feb 14 12:04:28 crc kubenswrapper[4736]: I0214 12:04:28.677670 4736 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-768c8b45bb-jbwwk_d6185be6-e012-411d-9b85-c971e12aebbd/manager/0.log" Feb 14 12:04:28 crc kubenswrapper[4736]: I0214 12:04:28.756324 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-76fd76856-dmpmv_07e92003-0bdf-4e0b-a35c-d8f96e3a57f8/manager/0.log" Feb 14 12:04:29 crc kubenswrapper[4736]: I0214 12:04:29.034153 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66997756f6-p2d5f_c2104410-cd10-43d8-84d1-8cd837d65ed4/manager/0.log" Feb 14 12:04:29 crc kubenswrapper[4736]: I0214 12:04:29.264503 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54967dbbdf-ptgcj_6a8d2df6-3e2b-4120-8848-9ab5ae903da5/manager/0.log" Feb 14 12:04:29 crc kubenswrapper[4736]: I0214 12:04:29.450283 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5ddd85db87-spx2d_fc679b24-ad26-46c8-8d9e-28ef80a48090/manager/0.log" Feb 14 12:04:29 crc kubenswrapper[4736]: I0214 12:04:29.640804 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-w2mbt_4800ac63-235a-4486-a61b-018e85369028/manager/0.log" Feb 14 12:04:30 crc kubenswrapper[4736]: I0214 12:04:30.363764 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-69b468cbcf-657fg_b74fa186-0772-4d4e-abcd-b04bc6fa4751/operator/0.log" Feb 14 12:04:30 crc kubenswrapper[4736]: I0214 12:04:30.635598 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4hxtr_6d54b312-9619-450b-a6b2-980caae9860e/registry-server/0.log" Feb 14 12:04:30 crc kubenswrapper[4736]: I0214 12:04:30.945255 4736 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-85c99d655-9zxzs_05c7d113-70d7-4bbf-9c0e-4981d602acd3/manager/0.log" Feb 14 12:04:31 crc kubenswrapper[4736]: I0214 12:04:31.726585 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57bd55f9b7-pqhqz_bd3596d4-d10d-45e0-b236-d0cca28bc09b/manager/0.log" Feb 14 12:04:31 crc kubenswrapper[4736]: I0214 12:04:31.973057 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4x6xz_dd5e2ee2-c48c-40fd-9a02-ce871056600f/operator/0.log" Feb 14 12:04:32 crc kubenswrapper[4736]: I0214 12:04:32.232141 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-79558bbfbf-9dgfx_391b46e9-4f14-4b12-9c9a-800eecfc51af/manager/0.log" Feb 14 12:04:32 crc kubenswrapper[4736]: I0214 12:04:32.504012 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-h52ld_332bd6ec-7fc0-4c92-bd0e-491f238a8680/manager/0.log" Feb 14 12:04:32 crc kubenswrapper[4736]: I0214 12:04:32.562518 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-745bbbd77b-cztvd_4b0b03c4-b031-408b-a6de-8b3af1064ebd/manager/0.log" Feb 14 12:04:32 crc kubenswrapper[4736]: I0214 12:04:32.685536 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-qr776_692863b5-b658-4d50-928e-b5357a279851/manager/0.log" Feb 14 12:04:32 crc kubenswrapper[4736]: I0214 12:04:32.763989 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7f46fb7bd6-whwbk_18979fdb-9863-4a61-a6cc-5984b041d7c6/manager/0.log" Feb 14 12:04:32 crc kubenswrapper[4736]: I0214 
12:04:32.785186 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c469bc6bb-2p58b_13ed197e-630c-4788-863e-23be47efe228/manager/0.log" Feb 14 12:04:37 crc kubenswrapper[4736]: I0214 12:04:37.803272 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-c4b7d6946-vg9f9_a1aa4225-909d-49ae-8ac7-d987a760f2d2/manager/0.log" Feb 14 12:04:47 crc kubenswrapper[4736]: I0214 12:04:47.695715 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 12:04:47 crc kubenswrapper[4736]: I0214 12:04:47.697303 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 12:04:56 crc kubenswrapper[4736]: I0214 12:04:56.102482 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8bxbt_b3550c81-1f31-4800-b399-4168db6f20fc/control-plane-machine-set-operator/0.log" Feb 14 12:04:56 crc kubenswrapper[4736]: I0214 12:04:56.217239 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-68p48_b305d178-1f44-4e74-9a0f-9a6c95fb4c45/kube-rbac-proxy/0.log" Feb 14 12:04:56 crc kubenswrapper[4736]: I0214 12:04:56.297168 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-68p48_b305d178-1f44-4e74-9a0f-9a6c95fb4c45/machine-api-operator/0.log" Feb 14 12:05:10 
crc kubenswrapper[4736]: I0214 12:05:10.641450 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-lsgkg_70c4aa44-ebfe-49e1-9e2a-d4f507794c4e/cert-manager-controller/0.log" Feb 14 12:05:10 crc kubenswrapper[4736]: I0214 12:05:10.720027 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xbtbh_c30450ff-d5e3-482b-9d67-63ac08a238e2/cert-manager-cainjector/0.log" Feb 14 12:05:10 crc kubenswrapper[4736]: I0214 12:05:10.898324 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-vg8jq_d7a4fec3-20be-4ba1-838e-45d9a777ba6a/cert-manager-webhook/0.log" Feb 14 12:05:17 crc kubenswrapper[4736]: I0214 12:05:17.695886 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 12:05:17 crc kubenswrapper[4736]: I0214 12:05:17.697329 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 12:05:25 crc kubenswrapper[4736]: I0214 12:05:25.071253 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-989cb_e3223403-6c82-4af7-8a7a-902982281d8b/nmstate-console-plugin/0.log" Feb 14 12:05:25 crc kubenswrapper[4736]: I0214 12:05:25.265638 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tfkkw_1799375f-7713-43d7-a0b2-9c76efff7daf/nmstate-handler/0.log" Feb 14 12:05:25 crc kubenswrapper[4736]: I0214 
12:05:25.338472 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-krg4j_e04e8849-dd0e-4a3f-98e0-8925563c7145/kube-rbac-proxy/0.log" Feb 14 12:05:25 crc kubenswrapper[4736]: I0214 12:05:25.410676 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-krg4j_e04e8849-dd0e-4a3f-98e0-8925563c7145/nmstate-metrics/0.log" Feb 14 12:05:25 crc kubenswrapper[4736]: I0214 12:05:25.590839 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-8d5p5_f9f3eda0-51a6-4de7-86d2-7b68836bcb67/nmstate-operator/0.log" Feb 14 12:05:25 crc kubenswrapper[4736]: I0214 12:05:25.638564 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-2gjv9_73130fe2-047e-44a9-986b-0734857df7a6/nmstate-webhook/0.log" Feb 14 12:05:47 crc kubenswrapper[4736]: I0214 12:05:47.695235 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 12:05:47 crc kubenswrapper[4736]: I0214 12:05:47.695805 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 12:05:47 crc kubenswrapper[4736]: I0214 12:05:47.695866 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 12:05:47 crc kubenswrapper[4736]: I0214 12:05:47.696656 4736 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e51bb2e7093fda381a5b8207db58bde0af705d17c7b2fdd241938e3efe166d2"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 12:05:47 crc kubenswrapper[4736]: I0214 12:05:47.696705 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://4e51bb2e7093fda381a5b8207db58bde0af705d17c7b2fdd241938e3efe166d2" gracePeriod=600 Feb 14 12:05:48 crc kubenswrapper[4736]: I0214 12:05:48.243854 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="4e51bb2e7093fda381a5b8207db58bde0af705d17c7b2fdd241938e3efe166d2" exitCode=0 Feb 14 12:05:48 crc kubenswrapper[4736]: I0214 12:05:48.243961 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"4e51bb2e7093fda381a5b8207db58bde0af705d17c7b2fdd241938e3efe166d2"} Feb 14 12:05:48 crc kubenswrapper[4736]: I0214 12:05:48.244514 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerStarted","Data":"134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244"} Feb 14 12:05:48 crc kubenswrapper[4736]: I0214 12:05:48.244536 4736 scope.go:117] "RemoveContainer" containerID="d6f5c1754810714d22974d469115c8ec7357e4a86d61f4ec9b2bc281d8cd7380" Feb 14 12:05:55 crc kubenswrapper[4736]: I0214 12:05:55.670221 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-kxmtf_d4a413eb-d17a-4f7f-bd22-d4f41f915d53/kube-rbac-proxy/0.log" Feb 14 12:05:55 crc kubenswrapper[4736]: I0214 12:05:55.776458 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-kxmtf_d4a413eb-d17a-4f7f-bd22-d4f41f915d53/controller/0.log" Feb 14 12:05:55 crc kubenswrapper[4736]: I0214 12:05:55.919014 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-llcsn_78e00005-015f-402a-9308-473763478d28/frr-k8s-webhook-server/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.037786 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-frr-files/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.211503 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-metrics/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.220228 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-frr-files/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.263524 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-reloader/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.273468 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-reloader/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.426651 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-reloader/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.434881 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-frr-files/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.481852 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-metrics/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.497516 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-metrics/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.660537 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-metrics/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.670070 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-reloader/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.693415 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/cp-frr-files/0.log" Feb 14 12:05:56 crc kubenswrapper[4736]: I0214 12:05:56.728154 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/controller/0.log" Feb 14 12:05:57 crc kubenswrapper[4736]: I0214 12:05:57.503952 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/frr-metrics/0.log" Feb 14 12:05:57 crc kubenswrapper[4736]: I0214 12:05:57.591348 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/kube-rbac-proxy/0.log" Feb 14 12:05:57 crc kubenswrapper[4736]: I0214 12:05:57.655537 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/kube-rbac-proxy-frr/0.log" Feb 14 12:05:57 crc kubenswrapper[4736]: I0214 12:05:57.782084 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/reloader/0.log" Feb 14 12:05:58 crc kubenswrapper[4736]: I0214 12:05:58.023355 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-575f5cbc8b-mg2p4_b64209dc-83e7-4c67-920c-0e8d9369d823/manager/0.log" Feb 14 12:05:58 crc kubenswrapper[4736]: I0214 12:05:58.085206 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69fc489c64-2rjlv_ab7820de-1649-428e-b823-d28364520352/webhook-server/0.log" Feb 14 12:05:58 crc kubenswrapper[4736]: I0214 12:05:58.197968 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wg9h9_efdeea7a-a8bb-482a-8dd5-6e8add6ed2e6/frr/0.log" Feb 14 12:05:58 crc kubenswrapper[4736]: I0214 12:05:58.316380 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tm7cx_7b23b424-9a4f-44c9-a999-7721acb1b135/kube-rbac-proxy/0.log" Feb 14 12:05:58 crc kubenswrapper[4736]: I0214 12:05:58.606569 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tm7cx_7b23b424-9a4f-44c9-a999-7721acb1b135/speaker/0.log" Feb 14 12:06:12 crc kubenswrapper[4736]: I0214 12:06:12.869132 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/util/0.log" Feb 14 12:06:12 crc kubenswrapper[4736]: I0214 12:06:12.991186 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/util/0.log" Feb 14 12:06:13 crc 
kubenswrapper[4736]: I0214 12:06:13.024673 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/pull/0.log" Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.042082 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/pull/0.log" Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.263828 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/extract/0.log" Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.281625 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/util/0.log" Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.330204 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cn7gj_bdf58939-65c9-4c99-9116-99f56d96754f/pull/0.log" Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.468287 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-utilities/0.log" Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.589112 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-content/0.log" Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.594638 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-content/0.log" 
Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.623509 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-utilities/0.log" Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.820105 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-content/0.log" Feb 14 12:06:13 crc kubenswrapper[4736]: I0214 12:06:13.854356 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/extract-utilities/0.log" Feb 14 12:06:14 crc kubenswrapper[4736]: I0214 12:06:14.073002 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-utilities/0.log" Feb 14 12:06:14 crc kubenswrapper[4736]: I0214 12:06:14.425967 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-utilities/0.log" Feb 14 12:06:14 crc kubenswrapper[4736]: I0214 12:06:14.479014 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-94s4t_b01fa613-73d4-4246-a376-02723ee39286/registry-server/0.log" Feb 14 12:06:14 crc kubenswrapper[4736]: I0214 12:06:14.495791 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-content/0.log" Feb 14 12:06:14 crc kubenswrapper[4736]: I0214 12:06:14.543090 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-content/0.log" Feb 14 12:06:14 crc kubenswrapper[4736]: I0214 12:06:14.699168 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-utilities/0.log" Feb 14 12:06:14 crc kubenswrapper[4736]: I0214 12:06:14.737770 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/extract-content/0.log" Feb 14 12:06:15 crc kubenswrapper[4736]: I0214 12:06:15.320265 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/util/0.log" Feb 14 12:06:15 crc kubenswrapper[4736]: I0214 12:06:15.421384 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k4zsg_af5139d3-a470-4c13-a66a-1fcf2eb8cd7b/registry-server/0.log" Feb 14 12:06:15 crc kubenswrapper[4736]: I0214 12:06:15.530576 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/util/0.log" Feb 14 12:06:15 crc kubenswrapper[4736]: I0214 12:06:15.561273 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/pull/0.log" Feb 14 12:06:15 crc kubenswrapper[4736]: I0214 12:06:15.589534 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/pull/0.log" Feb 14 12:06:15 crc kubenswrapper[4736]: I0214 12:06:15.715142 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/util/0.log" Feb 14 12:06:15 crc kubenswrapper[4736]: I0214 12:06:15.766252 4736 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/pull/0.log" Feb 14 12:06:15 crc kubenswrapper[4736]: I0214 12:06:15.787657 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecassw95_af3f932a-49f7-44eb-a953-0be84900d37a/extract/0.log" Feb 14 12:06:15 crc kubenswrapper[4736]: I0214 12:06:15.979523 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-v7kg4_9b7c3a8f-a5ad-4668-9ddc-9a0f33a1eed5/marketplace-operator/0.log" Feb 14 12:06:16 crc kubenswrapper[4736]: I0214 12:06:16.006300 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-utilities/0.log" Feb 14 12:06:16 crc kubenswrapper[4736]: I0214 12:06:16.245519 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-utilities/0.log" Feb 14 12:06:16 crc kubenswrapper[4736]: I0214 12:06:16.248701 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-content/0.log" Feb 14 12:06:16 crc kubenswrapper[4736]: I0214 12:06:16.299183 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-content/0.log" Feb 14 12:06:16 crc kubenswrapper[4736]: I0214 12:06:16.819858 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-utilities/0.log" Feb 14 12:06:16 crc kubenswrapper[4736]: I0214 12:06:16.859686 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/extract-content/0.log" Feb 14 12:06:16 crc kubenswrapper[4736]: I0214 12:06:16.949623 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5lgg_42f5df5b-7b21-45af-beb2-52f4bd141bb5/registry-server/0.log" Feb 14 12:06:17 crc kubenswrapper[4736]: I0214 12:06:17.018383 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-utilities/0.log" Feb 14 12:06:17 crc kubenswrapper[4736]: I0214 12:06:17.250096 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-content/0.log" Feb 14 12:06:17 crc kubenswrapper[4736]: I0214 12:06:17.254984 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-utilities/0.log" Feb 14 12:06:17 crc kubenswrapper[4736]: I0214 12:06:17.273317 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-content/0.log" Feb 14 12:06:17 crc kubenswrapper[4736]: I0214 12:06:17.456826 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-utilities/0.log" Feb 14 12:06:17 crc kubenswrapper[4736]: I0214 12:06:17.491192 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/extract-content/0.log" Feb 14 12:06:18 crc kubenswrapper[4736]: I0214 12:06:18.032856 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bcbsv_aa47e46f-8ce5-4184-8167-7951842f215e/registry-server/0.log" Feb 14 
12:08:17 crc kubenswrapper[4736]: I0214 12:08:17.695393 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 12:08:17 crc kubenswrapper[4736]: I0214 12:08:17.696222 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.490414 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d4r2n"] Feb 14 12:08:27 crc kubenswrapper[4736]: E0214 12:08:27.492028 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e927b932-6ca5-477b-8805-3e238bbfa92b" containerName="container-00" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.492050 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e927b932-6ca5-477b-8805-3e238bbfa92b" containerName="container-00" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.492249 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e927b932-6ca5-477b-8805-3e238bbfa92b" containerName="container-00" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.497010 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.500939 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4r2n"] Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.611373 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-utilities\") pod \"certified-operators-d4r2n\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.611489 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-catalog-content\") pod \"certified-operators-d4r2n\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.611527 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgs47\" (UniqueName: \"kubernetes.io/projected/7289435c-25de-42b8-8123-7369f6983fd2-kube-api-access-rgs47\") pod \"certified-operators-d4r2n\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.713267 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-catalog-content\") pod \"certified-operators-d4r2n\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.713334 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rgs47\" (UniqueName: \"kubernetes.io/projected/7289435c-25de-42b8-8123-7369f6983fd2-kube-api-access-rgs47\") pod \"certified-operators-d4r2n\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.713393 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-utilities\") pod \"certified-operators-d4r2n\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.713800 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-catalog-content\") pod \"certified-operators-d4r2n\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.713827 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-utilities\") pod \"certified-operators-d4r2n\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.736436 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgs47\" (UniqueName: \"kubernetes.io/projected/7289435c-25de-42b8-8123-7369f6983fd2-kube-api-access-rgs47\") pod \"certified-operators-d4r2n\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:27 crc kubenswrapper[4736]: I0214 12:08:27.815308 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:28 crc kubenswrapper[4736]: I0214 12:08:28.432263 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4r2n"] Feb 14 12:08:29 crc kubenswrapper[4736]: I0214 12:08:29.328789 4736 generic.go:334] "Generic (PLEG): container finished" podID="7289435c-25de-42b8-8123-7369f6983fd2" containerID="a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749" exitCode=0 Feb 14 12:08:29 crc kubenswrapper[4736]: I0214 12:08:29.328870 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4r2n" event={"ID":"7289435c-25de-42b8-8123-7369f6983fd2","Type":"ContainerDied","Data":"a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749"} Feb 14 12:08:29 crc kubenswrapper[4736]: I0214 12:08:29.329115 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4r2n" event={"ID":"7289435c-25de-42b8-8123-7369f6983fd2","Type":"ContainerStarted","Data":"69b95825f1d558753c9f27bb6de619cb1bb4c949a5487e3a13a6f66adfa2940e"} Feb 14 12:08:29 crc kubenswrapper[4736]: I0214 12:08:29.330831 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 12:08:31 crc kubenswrapper[4736]: I0214 12:08:31.345321 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4r2n" event={"ID":"7289435c-25de-42b8-8123-7369f6983fd2","Type":"ContainerStarted","Data":"ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19"} Feb 14 12:08:35 crc kubenswrapper[4736]: I0214 12:08:35.382262 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4r2n" event={"ID":"7289435c-25de-42b8-8123-7369f6983fd2","Type":"ContainerDied","Data":"ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19"} Feb 14 12:08:35 crc 
kubenswrapper[4736]: I0214 12:08:35.382205 4736 generic.go:334] "Generic (PLEG): container finished" podID="7289435c-25de-42b8-8123-7369f6983fd2" containerID="ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19" exitCode=0 Feb 14 12:08:36 crc kubenswrapper[4736]: I0214 12:08:36.394128 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4r2n" event={"ID":"7289435c-25de-42b8-8123-7369f6983fd2","Type":"ContainerStarted","Data":"84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2"} Feb 14 12:08:36 crc kubenswrapper[4736]: I0214 12:08:36.427186 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d4r2n" podStartSLOduration=2.944471646 podStartE2EDuration="9.427165616s" podCreationTimestamp="2026-02-14 12:08:27 +0000 UTC" firstStartedPulling="2026-02-14 12:08:29.330540235 +0000 UTC m=+5219.699167623" lastFinishedPulling="2026-02-14 12:08:35.813234225 +0000 UTC m=+5226.181861593" observedRunningTime="2026-02-14 12:08:36.425620002 +0000 UTC m=+5226.794247370" watchObservedRunningTime="2026-02-14 12:08:36.427165616 +0000 UTC m=+5226.795792994" Feb 14 12:08:37 crc kubenswrapper[4736]: I0214 12:08:37.815737 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:37 crc kubenswrapper[4736]: I0214 12:08:37.816135 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:37 crc kubenswrapper[4736]: I0214 12:08:37.879440 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:38 crc kubenswrapper[4736]: I0214 12:08:38.099707 4736 scope.go:117] "RemoveContainer" containerID="16d9f8d466dfb115b1077d33aaa726336e812acf4dd5b8d49263da774f83e0cb" Feb 14 12:08:38 crc kubenswrapper[4736]: I0214 
12:08:38.141615 4736 scope.go:117] "RemoveContainer" containerID="16d75ce76c22bd3c36f01181eaaa8ad7139008bd6128d5d275143765bf97bc9e" Feb 14 12:08:47 crc kubenswrapper[4736]: I0214 12:08:47.695901 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 12:08:47 crc kubenswrapper[4736]: I0214 12:08:47.696587 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 12:08:47 crc kubenswrapper[4736]: I0214 12:08:47.888500 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:47 crc kubenswrapper[4736]: I0214 12:08:47.955214 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4r2n"] Feb 14 12:08:48 crc kubenswrapper[4736]: I0214 12:08:48.513500 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d4r2n" podUID="7289435c-25de-42b8-8123-7369f6983fd2" containerName="registry-server" containerID="cri-o://84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2" gracePeriod=2 Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.014178 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.125352 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-utilities\") pod \"7289435c-25de-42b8-8123-7369f6983fd2\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.125465 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-catalog-content\") pod \"7289435c-25de-42b8-8123-7369f6983fd2\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.125701 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgs47\" (UniqueName: \"kubernetes.io/projected/7289435c-25de-42b8-8123-7369f6983fd2-kube-api-access-rgs47\") pod \"7289435c-25de-42b8-8123-7369f6983fd2\" (UID: \"7289435c-25de-42b8-8123-7369f6983fd2\") " Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.126246 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-utilities" (OuterVolumeSpecName: "utilities") pod "7289435c-25de-42b8-8123-7369f6983fd2" (UID: "7289435c-25de-42b8-8123-7369f6983fd2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.127141 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.150307 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7289435c-25de-42b8-8123-7369f6983fd2-kube-api-access-rgs47" (OuterVolumeSpecName: "kube-api-access-rgs47") pod "7289435c-25de-42b8-8123-7369f6983fd2" (UID: "7289435c-25de-42b8-8123-7369f6983fd2"). InnerVolumeSpecName "kube-api-access-rgs47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.195214 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7289435c-25de-42b8-8123-7369f6983fd2" (UID: "7289435c-25de-42b8-8123-7369f6983fd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.229038 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgs47\" (UniqueName: \"kubernetes.io/projected/7289435c-25de-42b8-8123-7369f6983fd2-kube-api-access-rgs47\") on node \"crc\" DevicePath \"\"" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.229081 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7289435c-25de-42b8-8123-7369f6983fd2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.523601 4736 generic.go:334] "Generic (PLEG): container finished" podID="7289435c-25de-42b8-8123-7369f6983fd2" containerID="84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2" exitCode=0 Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.523651 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4r2n" event={"ID":"7289435c-25de-42b8-8123-7369f6983fd2","Type":"ContainerDied","Data":"84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2"} Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.523686 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4r2n" event={"ID":"7289435c-25de-42b8-8123-7369f6983fd2","Type":"ContainerDied","Data":"69b95825f1d558753c9f27bb6de619cb1bb4c949a5487e3a13a6f66adfa2940e"} Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.523740 4736 scope.go:117] "RemoveContainer" containerID="84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.523995 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4r2n" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.591255 4736 scope.go:117] "RemoveContainer" containerID="ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.601859 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4r2n"] Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.611706 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d4r2n"] Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.630932 4736 scope.go:117] "RemoveContainer" containerID="a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.668349 4736 scope.go:117] "RemoveContainer" containerID="84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2" Feb 14 12:08:49 crc kubenswrapper[4736]: E0214 12:08:49.668959 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2\": container with ID starting with 84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2 not found: ID does not exist" containerID="84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.668997 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2"} err="failed to get container status \"84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2\": rpc error: code = NotFound desc = could not find container \"84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2\": container with ID starting with 84bdab0f335eb3f24bc8a5f18e338839219bfafb05ac3acb81541ebcb389e1a2 not 
found: ID does not exist" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.669044 4736 scope.go:117] "RemoveContainer" containerID="ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19" Feb 14 12:08:49 crc kubenswrapper[4736]: E0214 12:08:49.669307 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19\": container with ID starting with ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19 not found: ID does not exist" containerID="ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.669336 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19"} err="failed to get container status \"ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19\": rpc error: code = NotFound desc = could not find container \"ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19\": container with ID starting with ffa265153d365e84a3e7dffd556c6c96483eae78396a9ebcebf06c75b0e04f19 not found: ID does not exist" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.669355 4736 scope.go:117] "RemoveContainer" containerID="a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749" Feb 14 12:08:49 crc kubenswrapper[4736]: E0214 12:08:49.669707 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749\": container with ID starting with a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749 not found: ID does not exist" containerID="a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749" Feb 14 12:08:49 crc kubenswrapper[4736]: I0214 12:08:49.669732 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749"} err="failed to get container status \"a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749\": rpc error: code = NotFound desc = could not find container \"a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749\": container with ID starting with a53173646ab7800d4722e7037e11193946d06971b172a907a5ccf1400c7d9749 not found: ID does not exist" Feb 14 12:08:50 crc kubenswrapper[4736]: I0214 12:08:50.416638 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7289435c-25de-42b8-8123-7369f6983fd2" path="/var/lib/kubelet/pods/7289435c-25de-42b8-8123-7369f6983fd2/volumes" Feb 14 12:08:56 crc kubenswrapper[4736]: I0214 12:08:56.607582 4736 generic.go:334] "Generic (PLEG): container finished" podID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" containerID="9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af" exitCode=0 Feb 14 12:08:56 crc kubenswrapper[4736]: I0214 12:08:56.608075 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk64r/must-gather-j7qrq" event={"ID":"d2bef6de-2ec5-4c76-b0f6-1f00de71064e","Type":"ContainerDied","Data":"9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af"} Feb 14 12:08:56 crc kubenswrapper[4736]: I0214 12:08:56.609115 4736 scope.go:117] "RemoveContainer" containerID="9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af" Feb 14 12:08:57 crc kubenswrapper[4736]: I0214 12:08:57.607236 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rk64r_must-gather-j7qrq_d2bef6de-2ec5-4c76-b0f6-1f00de71064e/gather/0.log" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.142140 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk64r/must-gather-j7qrq"] Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.142824 4736 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rk64r/must-gather-j7qrq" podUID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" containerName="copy" containerID="cri-o://fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af" gracePeriod=2 Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.153713 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk64r/must-gather-j7qrq"] Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.723960 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rk64r_must-gather-j7qrq_d2bef6de-2ec5-4c76-b0f6-1f00de71064e/copy/0.log" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.724443 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.765639 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rk64r_must-gather-j7qrq_d2bef6de-2ec5-4c76-b0f6-1f00de71064e/copy/0.log" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.766077 4736 generic.go:334] "Generic (PLEG): container finished" podID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" containerID="fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af" exitCode=143 Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.766138 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk64r/must-gather-j7qrq" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.766180 4736 scope.go:117] "RemoveContainer" containerID="fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.787101 4736 scope.go:117] "RemoveContainer" containerID="9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.836150 4736 scope.go:117] "RemoveContainer" containerID="fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af" Feb 14 12:09:12 crc kubenswrapper[4736]: E0214 12:09:12.836557 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af\": container with ID starting with fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af not found: ID does not exist" containerID="fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.836593 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af"} err="failed to get container status \"fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af\": rpc error: code = NotFound desc = could not find container \"fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af\": container with ID starting with fe926649965cfedf0662b935bac03c02344f81e6031927dc81b12e2046c725af not found: ID does not exist" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.836625 4736 scope.go:117] "RemoveContainer" containerID="9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af" Feb 14 12:09:12 crc kubenswrapper[4736]: E0214 12:09:12.836846 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af\": container with ID starting with 9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af not found: ID does not exist" containerID="9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.836869 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af"} err="failed to get container status \"9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af\": rpc error: code = NotFound desc = could not find container \"9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af\": container with ID starting with 9ad801cf877f89028a79d14edfc43b2f6bc2af746ab8ffbdefafe2fb41dde8af not found: ID does not exist" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.846914 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xqxm\" (UniqueName: \"kubernetes.io/projected/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-kube-api-access-4xqxm\") pod \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\" (UID: \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\") " Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.847167 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-must-gather-output\") pod \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\" (UID: \"d2bef6de-2ec5-4c76-b0f6-1f00de71064e\") " Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.867572 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-kube-api-access-4xqxm" (OuterVolumeSpecName: "kube-api-access-4xqxm") pod "d2bef6de-2ec5-4c76-b0f6-1f00de71064e" (UID: 
"d2bef6de-2ec5-4c76-b0f6-1f00de71064e"). InnerVolumeSpecName "kube-api-access-4xqxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:09:12 crc kubenswrapper[4736]: I0214 12:09:12.950149 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xqxm\" (UniqueName: \"kubernetes.io/projected/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-kube-api-access-4xqxm\") on node \"crc\" DevicePath \"\"" Feb 14 12:09:13 crc kubenswrapper[4736]: I0214 12:09:13.055647 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d2bef6de-2ec5-4c76-b0f6-1f00de71064e" (UID: "d2bef6de-2ec5-4c76-b0f6-1f00de71064e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 12:09:13 crc kubenswrapper[4736]: I0214 12:09:13.154113 4736 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d2bef6de-2ec5-4c76-b0f6-1f00de71064e-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 14 12:09:14 crc kubenswrapper[4736]: I0214 12:09:14.413417 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" path="/var/lib/kubelet/pods/d2bef6de-2ec5-4c76-b0f6-1f00de71064e/volumes" Feb 14 12:09:17 crc kubenswrapper[4736]: I0214 12:09:17.695793 4736 patch_prober.go:28] interesting pod/machine-config-daemon-2bpbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 12:09:17 crc kubenswrapper[4736]: I0214 12:09:17.696181 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 12:09:17 crc kubenswrapper[4736]: I0214 12:09:17.696232 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" Feb 14 12:09:17 crc kubenswrapper[4736]: I0214 12:09:17.697016 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244"} pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 12:09:17 crc kubenswrapper[4736]: I0214 12:09:17.697073 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerName="machine-config-daemon" containerID="cri-o://134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" gracePeriod=600 Feb 14 12:09:17 crc kubenswrapper[4736]: E0214 12:09:17.819920 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:09:17 crc kubenswrapper[4736]: I0214 12:09:17.836379 4736 generic.go:334] "Generic (PLEG): container finished" podID="22bfc94a-170b-47f5-bc6b-c6e77720371d" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" exitCode=0 Feb 14 12:09:17 crc kubenswrapper[4736]: I0214 
12:09:17.836415 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" event={"ID":"22bfc94a-170b-47f5-bc6b-c6e77720371d","Type":"ContainerDied","Data":"134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244"} Feb 14 12:09:17 crc kubenswrapper[4736]: I0214 12:09:17.836443 4736 scope.go:117] "RemoveContainer" containerID="4e51bb2e7093fda381a5b8207db58bde0af705d17c7b2fdd241938e3efe166d2" Feb 14 12:09:17 crc kubenswrapper[4736]: I0214 12:09:17.837046 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:09:17 crc kubenswrapper[4736]: E0214 12:09:17.837269 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:09:31 crc kubenswrapper[4736]: I0214 12:09:31.397651 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:09:31 crc kubenswrapper[4736]: E0214 12:09:31.398585 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:09:45 crc kubenswrapper[4736]: I0214 12:09:45.398257 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 
12:09:45 crc kubenswrapper[4736]: E0214 12:09:45.399402 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.082602 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-85vpn"] Feb 14 12:09:53 crc kubenswrapper[4736]: E0214 12:09:53.083733 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7289435c-25de-42b8-8123-7369f6983fd2" containerName="registry-server" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.083841 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7289435c-25de-42b8-8123-7369f6983fd2" containerName="registry-server" Feb 14 12:09:53 crc kubenswrapper[4736]: E0214 12:09:53.083856 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" containerName="gather" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.083863 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" containerName="gather" Feb 14 12:09:53 crc kubenswrapper[4736]: E0214 12:09:53.083884 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7289435c-25de-42b8-8123-7369f6983fd2" containerName="extract-content" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.083893 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7289435c-25de-42b8-8123-7369f6983fd2" containerName="extract-content" Feb 14 12:09:53 crc kubenswrapper[4736]: E0214 12:09:53.083919 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7289435c-25de-42b8-8123-7369f6983fd2" 
containerName="extract-utilities" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.083926 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7289435c-25de-42b8-8123-7369f6983fd2" containerName="extract-utilities" Feb 14 12:09:53 crc kubenswrapper[4736]: E0214 12:09:53.083939 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" containerName="copy" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.083946 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" containerName="copy" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.084137 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" containerName="copy" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.084165 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bef6de-2ec5-4c76-b0f6-1f00de71064e" containerName="gather" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.084175 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7289435c-25de-42b8-8123-7369f6983fd2" containerName="registry-server" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.085825 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.099956 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85vpn"] Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.185470 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4ssd\" (UniqueName: \"kubernetes.io/projected/aaeaddbe-f453-4626-b10a-91a8db949b00-kube-api-access-s4ssd\") pod \"redhat-operators-85vpn\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.185603 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-utilities\") pod \"redhat-operators-85vpn\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.185650 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-catalog-content\") pod \"redhat-operators-85vpn\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.287915 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-utilities\") pod \"redhat-operators-85vpn\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.288236 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-catalog-content\") pod \"redhat-operators-85vpn\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.288303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4ssd\" (UniqueName: \"kubernetes.io/projected/aaeaddbe-f453-4626-b10a-91a8db949b00-kube-api-access-s4ssd\") pod \"redhat-operators-85vpn\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.289383 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-utilities\") pod \"redhat-operators-85vpn\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.289864 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-catalog-content\") pod \"redhat-operators-85vpn\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.317112 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4ssd\" (UniqueName: \"kubernetes.io/projected/aaeaddbe-f453-4626-b10a-91a8db949b00-kube-api-access-s4ssd\") pod \"redhat-operators-85vpn\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.408122 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:09:53 crc kubenswrapper[4736]: I0214 12:09:53.930103 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85vpn"] Feb 14 12:09:54 crc kubenswrapper[4736]: I0214 12:09:54.186586 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85vpn" event={"ID":"aaeaddbe-f453-4626-b10a-91a8db949b00","Type":"ContainerStarted","Data":"aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74"} Feb 14 12:09:54 crc kubenswrapper[4736]: I0214 12:09:54.186862 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85vpn" event={"ID":"aaeaddbe-f453-4626-b10a-91a8db949b00","Type":"ContainerStarted","Data":"27f1da71775d27cef609eb8bb4c6532df352d0d31cf39c9a751540c2bff74078"} Feb 14 12:09:55 crc kubenswrapper[4736]: I0214 12:09:55.198099 4736 generic.go:334] "Generic (PLEG): container finished" podID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerID="aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74" exitCode=0 Feb 14 12:09:55 crc kubenswrapper[4736]: I0214 12:09:55.198137 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85vpn" event={"ID":"aaeaddbe-f453-4626-b10a-91a8db949b00","Type":"ContainerDied","Data":"aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74"} Feb 14 12:09:56 crc kubenswrapper[4736]: I0214 12:09:56.208497 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85vpn" event={"ID":"aaeaddbe-f453-4626-b10a-91a8db949b00","Type":"ContainerStarted","Data":"8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a"} Feb 14 12:09:58 crc kubenswrapper[4736]: I0214 12:09:58.397132 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:09:58 crc 
kubenswrapper[4736]: E0214 12:09:58.397700 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:10:09 crc kubenswrapper[4736]: I0214 12:10:09.398159 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:10:09 crc kubenswrapper[4736]: E0214 12:10:09.400218 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:10:10 crc kubenswrapper[4736]: I0214 12:10:10.326984 4736 generic.go:334] "Generic (PLEG): container finished" podID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerID="8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a" exitCode=0 Feb 14 12:10:10 crc kubenswrapper[4736]: I0214 12:10:10.327071 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85vpn" event={"ID":"aaeaddbe-f453-4626-b10a-91a8db949b00","Type":"ContainerDied","Data":"8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a"} Feb 14 12:10:17 crc kubenswrapper[4736]: I0214 12:10:17.396528 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85vpn" 
event={"ID":"aaeaddbe-f453-4626-b10a-91a8db949b00","Type":"ContainerStarted","Data":"5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3"} Feb 14 12:10:17 crc kubenswrapper[4736]: I0214 12:10:17.445379 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-85vpn" podStartSLOduration=2.82547238 podStartE2EDuration="24.445323081s" podCreationTimestamp="2026-02-14 12:09:53 +0000 UTC" firstStartedPulling="2026-02-14 12:09:55.201480054 +0000 UTC m=+5305.570107422" lastFinishedPulling="2026-02-14 12:10:16.821330745 +0000 UTC m=+5327.189958123" observedRunningTime="2026-02-14 12:10:17.430770758 +0000 UTC m=+5327.799398146" watchObservedRunningTime="2026-02-14 12:10:17.445323081 +0000 UTC m=+5327.813950449" Feb 14 12:10:23 crc kubenswrapper[4736]: I0214 12:10:23.408535 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:10:23 crc kubenswrapper[4736]: I0214 12:10:23.409167 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:10:24 crc kubenswrapper[4736]: I0214 12:10:24.397626 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:10:24 crc kubenswrapper[4736]: E0214 12:10:24.398971 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:10:24 crc kubenswrapper[4736]: I0214 12:10:24.492257 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85vpn" 
podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="registry-server" probeResult="failure" output=< Feb 14 12:10:24 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 12:10:24 crc kubenswrapper[4736]: > Feb 14 12:10:34 crc kubenswrapper[4736]: I0214 12:10:34.450418 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85vpn" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="registry-server" probeResult="failure" output=< Feb 14 12:10:34 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 12:10:34 crc kubenswrapper[4736]: > Feb 14 12:10:39 crc kubenswrapper[4736]: I0214 12:10:39.397940 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:10:39 crc kubenswrapper[4736]: E0214 12:10:39.398866 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:10:44 crc kubenswrapper[4736]: I0214 12:10:44.482627 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85vpn" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="registry-server" probeResult="failure" output=< Feb 14 12:10:44 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 12:10:44 crc kubenswrapper[4736]: > Feb 14 12:10:51 crc kubenswrapper[4736]: I0214 12:10:51.398504 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:10:51 crc kubenswrapper[4736]: E0214 12:10:51.399326 4736 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:10:54 crc kubenswrapper[4736]: I0214 12:10:54.451254 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85vpn" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="registry-server" probeResult="failure" output=< Feb 14 12:10:54 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Feb 14 12:10:54 crc kubenswrapper[4736]: > Feb 14 12:11:02 crc kubenswrapper[4736]: I0214 12:11:02.397827 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:11:02 crc kubenswrapper[4736]: E0214 12:11:02.398818 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:11:03 crc kubenswrapper[4736]: I0214 12:11:03.467626 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:11:03 crc kubenswrapper[4736]: I0214 12:11:03.530161 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:11:03 crc kubenswrapper[4736]: I0214 12:11:03.722460 4736 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-85vpn"] Feb 14 12:11:04 crc kubenswrapper[4736]: I0214 12:11:04.902651 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-85vpn" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="registry-server" containerID="cri-o://5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3" gracePeriod=2 Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.393384 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.437584 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4ssd\" (UniqueName: \"kubernetes.io/projected/aaeaddbe-f453-4626-b10a-91a8db949b00-kube-api-access-s4ssd\") pod \"aaeaddbe-f453-4626-b10a-91a8db949b00\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.438018 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-utilities\") pod \"aaeaddbe-f453-4626-b10a-91a8db949b00\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.438068 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-catalog-content\") pod \"aaeaddbe-f453-4626-b10a-91a8db949b00\" (UID: \"aaeaddbe-f453-4626-b10a-91a8db949b00\") " Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.438534 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-utilities" (OuterVolumeSpecName: "utilities") pod "aaeaddbe-f453-4626-b10a-91a8db949b00" (UID: 
"aaeaddbe-f453-4626-b10a-91a8db949b00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.438875 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.443336 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaeaddbe-f453-4626-b10a-91a8db949b00-kube-api-access-s4ssd" (OuterVolumeSpecName: "kube-api-access-s4ssd") pod "aaeaddbe-f453-4626-b10a-91a8db949b00" (UID: "aaeaddbe-f453-4626-b10a-91a8db949b00"). InnerVolumeSpecName "kube-api-access-s4ssd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.541488 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4ssd\" (UniqueName: \"kubernetes.io/projected/aaeaddbe-f453-4626-b10a-91a8db949b00-kube-api-access-s4ssd\") on node \"crc\" DevicePath \"\"" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.579820 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaeaddbe-f453-4626-b10a-91a8db949b00" (UID: "aaeaddbe-f453-4626-b10a-91a8db949b00"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.642633 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaeaddbe-f453-4626-b10a-91a8db949b00-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.915551 4736 generic.go:334] "Generic (PLEG): container finished" podID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerID="5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3" exitCode=0 Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.915598 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85vpn" event={"ID":"aaeaddbe-f453-4626-b10a-91a8db949b00","Type":"ContainerDied","Data":"5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3"} Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.915626 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85vpn" event={"ID":"aaeaddbe-f453-4626-b10a-91a8db949b00","Type":"ContainerDied","Data":"27f1da71775d27cef609eb8bb4c6532df352d0d31cf39c9a751540c2bff74078"} Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.915644 4736 scope.go:117] "RemoveContainer" containerID="5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.915660 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-85vpn" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.942798 4736 scope.go:117] "RemoveContainer" containerID="8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a" Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.964011 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-85vpn"] Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.972959 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-85vpn"] Feb 14 12:11:05 crc kubenswrapper[4736]: I0214 12:11:05.979273 4736 scope.go:117] "RemoveContainer" containerID="aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74" Feb 14 12:11:06 crc kubenswrapper[4736]: I0214 12:11:06.015058 4736 scope.go:117] "RemoveContainer" containerID="5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3" Feb 14 12:11:06 crc kubenswrapper[4736]: E0214 12:11:06.015646 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3\": container with ID starting with 5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3 not found: ID does not exist" containerID="5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3" Feb 14 12:11:06 crc kubenswrapper[4736]: I0214 12:11:06.015699 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3"} err="failed to get container status \"5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3\": rpc error: code = NotFound desc = could not find container \"5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3\": container with ID starting with 5a03eca743d25b5d6042dc2103a79240be116d96c48198890f2565b9bedbced3 not found: ID does 
not exist" Feb 14 12:11:06 crc kubenswrapper[4736]: I0214 12:11:06.015898 4736 scope.go:117] "RemoveContainer" containerID="8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a" Feb 14 12:11:06 crc kubenswrapper[4736]: E0214 12:11:06.016363 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a\": container with ID starting with 8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a not found: ID does not exist" containerID="8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a" Feb 14 12:11:06 crc kubenswrapper[4736]: I0214 12:11:06.016396 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a"} err="failed to get container status \"8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a\": rpc error: code = NotFound desc = could not find container \"8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a\": container with ID starting with 8d84a5220025f894ff0212672c6bc5c54b6ce384658d30b543dba34a048fba3a not found: ID does not exist" Feb 14 12:11:06 crc kubenswrapper[4736]: I0214 12:11:06.016417 4736 scope.go:117] "RemoveContainer" containerID="aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74" Feb 14 12:11:06 crc kubenswrapper[4736]: E0214 12:11:06.016966 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74\": container with ID starting with aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74 not found: ID does not exist" containerID="aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74" Feb 14 12:11:06 crc kubenswrapper[4736]: I0214 12:11:06.017068 4736 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74"} err="failed to get container status \"aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74\": rpc error: code = NotFound desc = could not find container \"aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74\": container with ID starting with aa7b991a7034bb44a0e7f866e2672394a6841fe15340cb4954dd70394bac3c74 not found: ID does not exist" Feb 14 12:11:06 crc kubenswrapper[4736]: I0214 12:11:06.418924 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" path="/var/lib/kubelet/pods/aaeaddbe-f453-4626-b10a-91a8db949b00/volumes" Feb 14 12:11:16 crc kubenswrapper[4736]: I0214 12:11:16.398817 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:11:16 crc kubenswrapper[4736]: E0214 12:11:16.399670 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:11:30 crc kubenswrapper[4736]: I0214 12:11:30.404920 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:11:30 crc kubenswrapper[4736]: E0214 12:11:30.405728 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:11:45 crc kubenswrapper[4736]: I0214 12:11:45.398507 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:11:45 crc kubenswrapper[4736]: E0214 12:11:45.399429 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:12:00 crc kubenswrapper[4736]: I0214 12:12:00.406442 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:12:00 crc kubenswrapper[4736]: E0214 12:12:00.407179 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:12:11 crc kubenswrapper[4736]: I0214 12:12:11.397696 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:12:11 crc kubenswrapper[4736]: E0214 12:12:11.398435 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:12:24 crc kubenswrapper[4736]: I0214 12:12:24.400063 4736 scope.go:117] "RemoveContainer" containerID="134c66f4a203f7c42dced8b09454cef44db29d797213062acf5eb585df290244" Feb 14 12:12:24 crc kubenswrapper[4736]: E0214 12:12:24.401126 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2bpbj_openshift-machine-config-operator(22bfc94a-170b-47f5-bc6b-c6e77720371d)\"" pod="openshift-machine-config-operator/machine-config-daemon-2bpbj" podUID="22bfc94a-170b-47f5-bc6b-c6e77720371d" Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.827100 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xmblz"] Feb 14 12:12:25 crc kubenswrapper[4736]: E0214 12:12:25.829983 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="extract-content" Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.830011 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="extract-content" Feb 14 12:12:25 crc kubenswrapper[4736]: E0214 12:12:25.830030 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="extract-utilities" Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.830039 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="extract-utilities" Feb 14 12:12:25 crc kubenswrapper[4736]: E0214 12:12:25.830066 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="registry-server"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.830073 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="registry-server"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.830260 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaeaddbe-f453-4626-b10a-91a8db949b00" containerName="registry-server"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.831756 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.850913 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmblz"]
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.859896 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad2990a-0b68-4459-aa6c-a0c478e43b2c-catalog-content\") pod \"redhat-marketplace-xmblz\" (UID: \"cad2990a-0b68-4459-aa6c-a0c478e43b2c\") " pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.859938 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad2990a-0b68-4459-aa6c-a0c478e43b2c-utilities\") pod \"redhat-marketplace-xmblz\" (UID: \"cad2990a-0b68-4459-aa6c-a0c478e43b2c\") " pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.859955 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd75t\" (UniqueName: \"kubernetes.io/projected/cad2990a-0b68-4459-aa6c-a0c478e43b2c-kube-api-access-xd75t\") pod \"redhat-marketplace-xmblz\" (UID: \"cad2990a-0b68-4459-aa6c-a0c478e43b2c\") " pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.962926 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad2990a-0b68-4459-aa6c-a0c478e43b2c-catalog-content\") pod \"redhat-marketplace-xmblz\" (UID: \"cad2990a-0b68-4459-aa6c-a0c478e43b2c\") " pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.962988 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad2990a-0b68-4459-aa6c-a0c478e43b2c-utilities\") pod \"redhat-marketplace-xmblz\" (UID: \"cad2990a-0b68-4459-aa6c-a0c478e43b2c\") " pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.963016 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd75t\" (UniqueName: \"kubernetes.io/projected/cad2990a-0b68-4459-aa6c-a0c478e43b2c-kube-api-access-xd75t\") pod \"redhat-marketplace-xmblz\" (UID: \"cad2990a-0b68-4459-aa6c-a0c478e43b2c\") " pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.963558 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad2990a-0b68-4459-aa6c-a0c478e43b2c-utilities\") pod \"redhat-marketplace-xmblz\" (UID: \"cad2990a-0b68-4459-aa6c-a0c478e43b2c\") " pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.963572 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad2990a-0b68-4459-aa6c-a0c478e43b2c-catalog-content\") pod \"redhat-marketplace-xmblz\" (UID: \"cad2990a-0b68-4459-aa6c-a0c478e43b2c\") " pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:25 crc kubenswrapper[4736]: I0214 12:12:25.983789 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd75t\" (UniqueName: \"kubernetes.io/projected/cad2990a-0b68-4459-aa6c-a0c478e43b2c-kube-api-access-xd75t\") pod \"redhat-marketplace-xmblz\" (UID: \"cad2990a-0b68-4459-aa6c-a0c478e43b2c\") " pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:26 crc kubenswrapper[4736]: I0214 12:12:26.159683 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmblz"
Feb 14 12:12:26 crc kubenswrapper[4736]: I0214 12:12:26.664759 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmblz"]
Feb 14 12:12:26 crc kubenswrapper[4736]: I0214 12:12:26.787586 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmblz" event={"ID":"cad2990a-0b68-4459-aa6c-a0c478e43b2c","Type":"ContainerStarted","Data":"dbc3f81f23ceaf0321f0ed9e473a8f8b2618dcc3149c2721235f542905c06022"}
Feb 14 12:12:27 crc kubenswrapper[4736]: I0214 12:12:27.805032 4736 generic.go:334] "Generic (PLEG): container finished" podID="cad2990a-0b68-4459-aa6c-a0c478e43b2c" containerID="aa7b365282f0c6c24f49fd830784981b3443293a55b06f498e5b80e34a6e425b" exitCode=0
Feb 14 12:12:27 crc kubenswrapper[4736]: I0214 12:12:27.806782 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmblz" event={"ID":"cad2990a-0b68-4459-aa6c-a0c478e43b2c","Type":"ContainerDied","Data":"aa7b365282f0c6c24f49fd830784981b3443293a55b06f498e5b80e34a6e425b"}
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.226361 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-54p2d"]
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.231120 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.250487 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54p2d"]
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.361801 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12685166-e63e-4546-bc41-0201c9f08a20-catalog-content\") pod \"community-operators-54p2d\" (UID: \"12685166-e63e-4546-bc41-0201c9f08a20\") " pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.361873 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12685166-e63e-4546-bc41-0201c9f08a20-utilities\") pod \"community-operators-54p2d\" (UID: \"12685166-e63e-4546-bc41-0201c9f08a20\") " pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.361967 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5m2\" (UniqueName: \"kubernetes.io/projected/12685166-e63e-4546-bc41-0201c9f08a20-kube-api-access-tt5m2\") pod \"community-operators-54p2d\" (UID: \"12685166-e63e-4546-bc41-0201c9f08a20\") " pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.463404 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12685166-e63e-4546-bc41-0201c9f08a20-catalog-content\") pod \"community-operators-54p2d\" (UID: \"12685166-e63e-4546-bc41-0201c9f08a20\") " pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.463449 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12685166-e63e-4546-bc41-0201c9f08a20-utilities\") pod \"community-operators-54p2d\" (UID: \"12685166-e63e-4546-bc41-0201c9f08a20\") " pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.463482 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt5m2\" (UniqueName: \"kubernetes.io/projected/12685166-e63e-4546-bc41-0201c9f08a20-kube-api-access-tt5m2\") pod \"community-operators-54p2d\" (UID: \"12685166-e63e-4546-bc41-0201c9f08a20\") " pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.463963 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12685166-e63e-4546-bc41-0201c9f08a20-catalog-content\") pod \"community-operators-54p2d\" (UID: \"12685166-e63e-4546-bc41-0201c9f08a20\") " pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.464009 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12685166-e63e-4546-bc41-0201c9f08a20-utilities\") pod \"community-operators-54p2d\" (UID: \"12685166-e63e-4546-bc41-0201c9f08a20\") " pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.484712 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt5m2\" (UniqueName: \"kubernetes.io/projected/12685166-e63e-4546-bc41-0201c9f08a20-kube-api-access-tt5m2\") pod \"community-operators-54p2d\" (UID: \"12685166-e63e-4546-bc41-0201c9f08a20\") " pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.565998 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54p2d"
Feb 14 12:12:28 crc kubenswrapper[4736]: I0214 12:12:28.838715 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmblz" event={"ID":"cad2990a-0b68-4459-aa6c-a0c478e43b2c","Type":"ContainerStarted","Data":"df474a301cf3bd2b6529209e9d5c8cbf2a7bfc62ba68526dcb3221e5a88cc093"}
Feb 14 12:12:29 crc kubenswrapper[4736]: I0214 12:12:29.201394 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54p2d"]
Feb 14 12:12:29 crc kubenswrapper[4736]: W0214 12:12:29.214110 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12685166_e63e_4546_bc41_0201c9f08a20.slice/crio-833ad9c1aecbffc53a4ea2c8e74bbb78f57548124ea8704e391075437438dfbf WatchSource:0}: Error finding container 833ad9c1aecbffc53a4ea2c8e74bbb78f57548124ea8704e391075437438dfbf: Status 404 returned error can't find the container with id 833ad9c1aecbffc53a4ea2c8e74bbb78f57548124ea8704e391075437438dfbf
Feb 14 12:12:29 crc kubenswrapper[4736]: I0214 12:12:29.854182 4736 generic.go:334] "Generic (PLEG): container finished" podID="cad2990a-0b68-4459-aa6c-a0c478e43b2c" containerID="df474a301cf3bd2b6529209e9d5c8cbf2a7bfc62ba68526dcb3221e5a88cc093" exitCode=0
Feb 14 12:12:29 crc kubenswrapper[4736]: I0214 12:12:29.855047 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmblz" event={"ID":"cad2990a-0b68-4459-aa6c-a0c478e43b2c","Type":"ContainerDied","Data":"df474a301cf3bd2b6529209e9d5c8cbf2a7bfc62ba68526dcb3221e5a88cc093"}
Feb 14 12:12:29 crc kubenswrapper[4736]: I0214 12:12:29.861980 4736 generic.go:334] "Generic (PLEG): container finished" podID="12685166-e63e-4546-bc41-0201c9f08a20" containerID="5a9888c1effcb878efd1808a0be1c2fe4742da9af2b4811a36ada2f55f4f011e" exitCode=0
Feb 14 12:12:29 crc kubenswrapper[4736]: I0214 12:12:29.862035 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54p2d" event={"ID":"12685166-e63e-4546-bc41-0201c9f08a20","Type":"ContainerDied","Data":"5a9888c1effcb878efd1808a0be1c2fe4742da9af2b4811a36ada2f55f4f011e"}
Feb 14 12:12:29 crc kubenswrapper[4736]: I0214 12:12:29.862071 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54p2d" event={"ID":"12685166-e63e-4546-bc41-0201c9f08a20","Type":"ContainerStarted","Data":"833ad9c1aecbffc53a4ea2c8e74bbb78f57548124ea8704e391075437438dfbf"}
Feb 14 12:12:30 crc kubenswrapper[4736]: I0214 12:12:30.872658 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmblz" event={"ID":"cad2990a-0b68-4459-aa6c-a0c478e43b2c","Type":"ContainerStarted","Data":"a739b4605ecca520d5df0c60d4122e65343e4ae4a1996008ced0d1dcd12cf91d"}
Feb 14 12:12:30 crc kubenswrapper[4736]: I0214 12:12:30.875940 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54p2d" event={"ID":"12685166-e63e-4546-bc41-0201c9f08a20","Type":"ContainerStarted","Data":"5c2f7b1030c78aa31eaa36e52ff8d80e381bd47fca0819594d82ad04b2593fe5"}
Feb 14 12:12:30 crc kubenswrapper[4736]: I0214 12:12:30.900699 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xmblz" podStartSLOduration=3.406814576 podStartE2EDuration="5.900680879s" podCreationTimestamp="2026-02-14 12:12:25 +0000 UTC" firstStartedPulling="2026-02-14 12:12:27.809308012 +0000 UTC m=+5458.177935390" lastFinishedPulling="2026-02-14 12:12:30.303174325 +0000 UTC m=+5460.671801693" observedRunningTime="2026-02-14 12:12:30.895969016 +0000 UTC m=+5461.264596394" watchObservedRunningTime="2026-02-14 12:12:30.900680879 +0000 UTC m=+5461.269308247"
Feb 14 12:12:33 crc kubenswrapper[4736]: I0214 12:12:33.910274 4736 generic.go:334] "Generic (PLEG): container finished" podID="12685166-e63e-4546-bc41-0201c9f08a20" containerID="5c2f7b1030c78aa31eaa36e52ff8d80e381bd47fca0819594d82ad04b2593fe5" exitCode=0
Feb 14 12:12:33 crc kubenswrapper[4736]: I0214 12:12:33.910333 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54p2d" event={"ID":"12685166-e63e-4546-bc41-0201c9f08a20","Type":"ContainerDied","Data":"5c2f7b1030c78aa31eaa36e52ff8d80e381bd47fca0819594d82ad04b2593fe5"}
Feb 14 12:12:34 crc kubenswrapper[4736]: I0214 12:12:34.923175 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54p2d" event={"ID":"12685166-e63e-4546-bc41-0201c9f08a20","Type":"ContainerStarted","Data":"ec3965bb43e05a25abe702e7394af099d064277124e27ddcb03fa85ba2496d70"}
Feb 14 12:12:34 crc kubenswrapper[4736]: I0214 12:12:34.958453 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-54p2d" podStartSLOduration=2.537054027 podStartE2EDuration="6.958423712s" podCreationTimestamp="2026-02-14 12:12:28 +0000 UTC" firstStartedPulling="2026-02-14 12:12:29.863328941 +0000 UTC m=+5460.231956319" lastFinishedPulling="2026-02-14 12:12:34.284698636 +0000 UTC m=+5464.653326004" observedRunningTime="2026-02-14 12:12:34.941851102 +0000 UTC m=+5465.310478480" watchObservedRunningTime="2026-02-14 12:12:34.958423712 +0000 UTC m=+5465.327051150"